00:00:00.001 Started by upstream project "autotest-per-patch" build number 126260 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.096 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.128 Using shallow fetch with depth 1 00:00:00.128 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.128 > git --version # timeout=10 00:00:00.160 > git --version # 'git version 2.39.2' 00:00:00.160 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.370 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.382 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.393 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.393 > git config core.sparsecheckout # timeout=10 00:00:06.404 > git read-tree -mu HEAD # timeout=10 00:00:06.420 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.439 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.439 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.524 [Pipeline] Start of Pipeline 00:00:06.538 [Pipeline] library 00:00:06.539 Loading library shm_lib@master 00:00:06.540 Library shm_lib@master is cached. Copying from home. 00:00:06.554 [Pipeline] node 00:00:06.563 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:06.565 [Pipeline] { 00:00:06.573 [Pipeline] catchError 00:00:06.574 [Pipeline] { 00:00:06.586 [Pipeline] wrap 00:00:06.593 [Pipeline] { 00:00:06.599 [Pipeline] stage 00:00:06.600 [Pipeline] { (Prologue) 00:00:06.616 [Pipeline] echo 00:00:06.617 Node: VM-host-SM17 00:00:06.623 [Pipeline] cleanWs 00:00:06.631 [WS-CLEANUP] Deleting project workspace... 00:00:06.631 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.638 [WS-CLEANUP] done 00:00:06.839 [Pipeline] setCustomBuildProperty 00:00:06.907 [Pipeline] httpRequest 00:00:06.928 [Pipeline] echo 00:00:06.930 Sorcerer 10.211.164.101 is alive 00:00:06.939 [Pipeline] httpRequest 00:00:06.942 HttpMethod: GET 00:00:06.942 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.943 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.963 Response Code: HTTP/1.1 200 OK 00:00:06.963 Success: Status code 200 is in the accepted range: 200,404 00:00:06.964 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.855 [Pipeline] sh 00:00:12.144 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:12.162 [Pipeline] httpRequest 00:00:12.195 [Pipeline] echo 00:00:12.197 Sorcerer 10.211.164.101 is alive 00:00:12.208 [Pipeline] httpRequest 00:00:12.213 HttpMethod: GET 00:00:12.213 URL: http://10.211.164.101/packages/spdk_d608564df2dc354b5d29585f7dfab53d208dc1d0.tar.gz 00:00:12.214 Sending request to url: http://10.211.164.101/packages/spdk_d608564df2dc354b5d29585f7dfab53d208dc1d0.tar.gz 00:00:12.233 Response Code: HTTP/1.1 200 OK 00:00:12.234 Success: Status code 200 is in the accepted range: 200,404 00:00:12.234 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_d608564df2dc354b5d29585f7dfab53d208dc1d0.tar.gz 00:01:00.640 [Pipeline] sh 00:01:00.917 + tar --no-same-owner -xf spdk_d608564df2dc354b5d29585f7dfab53d208dc1d0.tar.gz 00:01:04.209 [Pipeline] sh 00:01:04.487 + git -C spdk log --oneline -n5 00:01:04.487 d608564df bdev/raid1: Support resize when increasing the size of base bdevs 00:01:04.487 60a8d0ce7 python/rpc: Prepare bdev.py for easy comparation 00:01:04.487 8feddada9 python/rpc: Unify parameters in all calls bdev.py 00:01:04.487 c96285725 accel: add fn to get accel driver name 00:01:04.487 2bc9d36b7 nvme/tcp: add sock tracepoint relation 00:01:04.505 [Pipeline] writeFile 00:01:04.521 [Pipeline] sh 00:01:04.799 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.810 [Pipeline] sh 00:01:05.103 + cat autorun-spdk.conf 00:01:05.103 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.103 SPDK_TEST_NVMF=1 00:01:05.103 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.103 SPDK_TEST_URING=1 00:01:05.103 SPDK_TEST_USDT=1 00:01:05.103 SPDK_RUN_UBSAN=1 00:01:05.103 NET_TYPE=virt 00:01:05.103 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.168 RUN_NIGHTLY=0 00:01:05.170 [Pipeline] } 00:01:05.188 [Pipeline] // stage 00:01:05.204 [Pipeline] stage 00:01:05.206 [Pipeline] { (Run VM) 00:01:05.221 [Pipeline] sh 00:01:05.500 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:05.500 + echo 'Start stage prepare_nvme.sh' 00:01:05.500 Start stage prepare_nvme.sh 00:01:05.500 + [[ -n 0 ]] 00:01:05.500 + disk_prefix=ex0 00:01:05.500 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:01:05.500 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:01:05.500 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:01:05.500 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.500 ++ SPDK_TEST_NVMF=1 00:01:05.500 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.500 ++ SPDK_TEST_URING=1 00:01:05.500 ++ SPDK_TEST_USDT=1 00:01:05.500 ++ SPDK_RUN_UBSAN=1 00:01:05.500 ++ NET_TYPE=virt 00:01:05.500 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.500 ++ RUN_NIGHTLY=0 00:01:05.500 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:05.500 + nvme_files=() 00:01:05.500 + declare -A nvme_files 00:01:05.500 + backend_dir=/var/lib/libvirt/images/backends 00:01:05.500 + nvme_files['nvme.img']=5G 00:01:05.500 + nvme_files['nvme-cmb.img']=5G 00:01:05.500 + nvme_files['nvme-multi0.img']=4G 00:01:05.500 + nvme_files['nvme-multi1.img']=4G 00:01:05.500 + nvme_files['nvme-multi2.img']=4G 00:01:05.500 + nvme_files['nvme-openstack.img']=8G 00:01:05.500 + nvme_files['nvme-zns.img']=5G 00:01:05.500 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:05.500 + (( SPDK_TEST_FTL == 1 )) 00:01:05.500 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:05.500 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:05.500 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:05.500 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:05.500 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:05.500 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:05.500 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:05.500 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.500 + for nvme in "${!nvme_files[@]}" 00:01:05.500 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:06.066 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.066 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:06.066 + echo 'End stage prepare_nvme.sh' 00:01:06.066 End stage prepare_nvme.sh 00:01:06.079 [Pipeline] sh 00:01:06.358 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:06.358 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:01:06.358 00:01:06.358 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:01:06.358 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:01:06.358 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:06.358 HELP=0 00:01:06.358 DRY_RUN=0 00:01:06.358 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:06.358 NVME_DISKS_TYPE=nvme,nvme, 00:01:06.358 NVME_AUTO_CREATE=0 00:01:06.358 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:06.358 NVME_CMB=,, 00:01:06.358 NVME_PMR=,, 00:01:06.358 NVME_ZNS=,, 00:01:06.358 NVME_MS=,, 00:01:06.358 NVME_FDP=,, 00:01:06.358 SPDK_VAGRANT_DISTRO=fedora38 00:01:06.358 SPDK_VAGRANT_VMCPU=10 00:01:06.358 SPDK_VAGRANT_VMRAM=12288 00:01:06.358 SPDK_VAGRANT_PROVIDER=libvirt 00:01:06.358 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:06.358 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:06.358 SPDK_OPENSTACK_NETWORK=0 00:01:06.358 VAGRANT_PACKAGE_BOX=0 00:01:06.358 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:06.358 FORCE_DISTRO=true 00:01:06.358 VAGRANT_BOX_VERSION= 00:01:06.358 EXTRA_VAGRANTFILES= 00:01:06.358 NIC_MODEL=e1000 00:01:06.358 00:01:06.358 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:01:06.358 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:09.637 Bringing machine 'default' up with 'libvirt' provider... 00:01:10.571 ==> default: Creating image (snapshot of base box volume). 00:01:10.571 ==> default: Creating domain with the following settings... 00:01:10.571 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721082685_d0e4b5b18aaafc42dbd9 00:01:10.571 ==> default: -- Domain type: kvm 00:01:10.571 ==> default: -- Cpus: 10 00:01:10.571 ==> default: -- Feature: acpi 00:01:10.571 ==> default: -- Feature: apic 00:01:10.571 ==> default: -- Feature: pae 00:01:10.571 ==> default: -- Memory: 12288M 00:01:10.571 ==> default: -- Memory Backing: hugepages: 00:01:10.571 ==> default: -- Management MAC: 00:01:10.571 ==> default: -- Loader: 00:01:10.571 ==> default: -- Nvram: 00:01:10.571 ==> default: -- Base box: spdk/fedora38 00:01:10.571 ==> default: -- Storage pool: default 00:01:10.571 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721082685_d0e4b5b18aaafc42dbd9.img (20G) 00:01:10.571 ==> default: -- Volume Cache: default 00:01:10.571 ==> default: -- Kernel: 00:01:10.571 ==> default: -- Initrd: 00:01:10.571 ==> default: -- Graphics Type: vnc 00:01:10.571 ==> default: -- Graphics Port: -1 00:01:10.571 ==> default: -- Graphics IP: 127.0.0.1 00:01:10.571 ==> default: -- Graphics Password: Not defined 00:01:10.571 ==> default: -- Video Type: cirrus 00:01:10.571 ==> default: -- Video VRAM: 9216 00:01:10.571 ==> default: -- Sound Type: 00:01:10.571 ==> default: -- Keymap: en-us 00:01:10.571 ==> default: -- TPM Path: 00:01:10.571 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:10.571 ==> default: -- Command line args: 00:01:10.571 ==> default: -> value=-device, 00:01:10.571 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:10.571 ==> default: -> value=-drive, 00:01:10.571 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:10.571 ==> default: -> value=-device, 
00:01:10.571 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.571 ==> default: -> value=-device, 00:01:10.571 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:10.571 ==> default: -> value=-drive, 00:01:10.571 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:10.571 ==> default: -> value=-device, 00:01:10.571 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.571 ==> default: -> value=-drive, 00:01:10.571 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:10.571 ==> default: -> value=-device, 00:01:10.571 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.571 ==> default: -> value=-drive, 00:01:10.571 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:10.571 ==> default: -> value=-device, 00:01:10.571 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.571 ==> default: Creating shared folders metadata... 00:01:10.571 ==> default: Starting domain. 00:01:12.470 ==> default: Waiting for domain to get an IP address... 00:01:30.544 ==> default: Waiting for SSH to become available... 00:01:30.544 ==> default: Configuring and enabling network interfaces... 00:01:34.731 default: SSH address: 192.168.121.15:22 00:01:34.731 default: SSH username: vagrant 00:01:34.731 default: SSH auth method: private key 00:01:36.667 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.781 ==> default: Mounting SSHFS shared folder... 00:01:45.717 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:45.717 ==> default: Checking Mount.. 00:01:47.091 ==> default: Folder Successfully Mounted! 00:01:47.091 ==> default: Running provisioner: file... 00:01:48.025 default: ~/.gitconfig => .gitconfig 00:01:48.284 00:01:48.284 SUCCESS! 00:01:48.284 00:01:48.284 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:48.284 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:48.284 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
00:01:48.284 00:01:48.292 [Pipeline] } 00:01:48.309 [Pipeline] // stage 00:01:48.317 [Pipeline] dir 00:01:48.317 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:01:48.318 [Pipeline] { 00:01:48.326 [Pipeline] catchError 00:01:48.327 [Pipeline] { 00:01:48.335 [Pipeline] sh 00:01:48.608 + vagrant ssh-config --host vagrant 00:01:48.608 + sed -ne /^Host/,$p 00:01:48.608 + tee ssh_conf 00:01:52.792 Host vagrant 00:01:52.792 HostName 192.168.121.15 00:01:52.792 User vagrant 00:01:52.792 Port 22 00:01:52.792 UserKnownHostsFile /dev/null 00:01:52.792 StrictHostKeyChecking no 00:01:52.792 PasswordAuthentication no 00:01:52.792 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:52.792 IdentitiesOnly yes 00:01:52.792 LogLevel FATAL 00:01:52.792 ForwardAgent yes 00:01:52.792 ForwardX11 yes 00:01:52.792 00:01:52.805 [Pipeline] withEnv 00:01:52.807 [Pipeline] { 00:01:52.824 [Pipeline] sh 00:01:53.107 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:53.107 source /etc/os-release 00:01:53.107 [[ -e /image.version ]] && img=$(< /image.version) 00:01:53.107 # Minimal, systemd-like check. 00:01:53.107 if [[ -e /.dockerenv ]]; then 00:01:53.107 # Clear garbage from the node's name: 00:01:53.107 # agt-er_autotest_547-896 -> autotest_547-896 00:01:53.107 # $HOSTNAME is the actual container id 00:01:53.107 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:53.107 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:53.107 # We can assume this is a mount from a host where container is running, 00:01:53.107 # so fetch its hostname to easily identify the target swarm worker. 00:01:53.107 container="$(< /etc/hostname) ($agent)" 00:01:53.107 else 00:01:53.107 # Fallback 00:01:53.107 container=$agent 00:01:53.107 fi 00:01:53.107 fi 00:01:53.107 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:53.107 00:01:53.374 [Pipeline] } 00:01:53.418 [Pipeline] // withEnv 00:01:53.426 [Pipeline] setCustomBuildProperty 00:01:53.441 [Pipeline] stage 00:01:53.444 [Pipeline] { (Tests) 00:01:53.464 [Pipeline] sh 00:01:53.745 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:53.760 [Pipeline] sh 00:01:54.040 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:54.316 [Pipeline] timeout 00:01:54.317 Timeout set to expire in 30 min 00:01:54.319 [Pipeline] { 00:01:54.338 [Pipeline] sh 00:01:54.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:55.185 HEAD is now at d608564df bdev/raid1: Support resize when increasing the size of base bdevs 00:01:55.199 [Pipeline] sh 00:01:55.535 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:55.551 [Pipeline] sh 00:01:55.830 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:56.105 [Pipeline] sh 00:01:56.382 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:56.641 ++ readlink -f spdk_repo 00:01:56.641 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:56.641 + [[ -n /home/vagrant/spdk_repo ]] 00:01:56.641 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:56.641 + 
DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:56.641 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:56.641 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:56.641 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:56.641 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:56.641 + cd /home/vagrant/spdk_repo 00:01:56.641 + source /etc/os-release 00:01:56.641 ++ NAME='Fedora Linux' 00:01:56.641 ++ VERSION='38 (Cloud Edition)' 00:01:56.641 ++ ID=fedora 00:01:56.641 ++ VERSION_ID=38 00:01:56.641 ++ VERSION_CODENAME= 00:01:56.641 ++ PLATFORM_ID=platform:f38 00:01:56.641 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:56.641 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:56.641 ++ LOGO=fedora-logo-icon 00:01:56.641 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:56.641 ++ HOME_URL=https://fedoraproject.org/ 00:01:56.641 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:56.641 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:56.641 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:56.641 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:56.641 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:56.641 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:56.641 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:56.641 ++ SUPPORT_END=2024-05-14 00:01:56.641 ++ VARIANT='Cloud Edition' 00:01:56.641 ++ VARIANT_ID=cloud 00:01:56.641 + uname -a 00:01:56.641 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:56.641 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:56.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:56.900 Hugepages 00:01:56.900 node hugesize free / total 00:01:56.900 node0 1048576kB 0 / 0 00:01:56.900 node0 2048kB 0 / 0 00:01:56.900 00:01:56.900 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.158 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:57.158 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:57.158 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:57.158 + rm -f /tmp/spdk-ld-path 00:01:57.158 + source autorun-spdk.conf 00:01:57.158 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.158 ++ SPDK_TEST_NVMF=1 00:01:57.158 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.158 ++ SPDK_TEST_URING=1 00:01:57.158 ++ SPDK_TEST_USDT=1 00:01:57.158 ++ SPDK_RUN_UBSAN=1 00:01:57.158 ++ NET_TYPE=virt 00:01:57.158 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.158 ++ RUN_NIGHTLY=0 00:01:57.158 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.158 + [[ -n '' ]] 00:01:57.158 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:57.158 + for M in /var/spdk/build-*-manifest.txt 00:01:57.158 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.158 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.158 + for M in /var/spdk/build-*-manifest.txt 00:01:57.158 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.158 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.158 ++ uname 00:01:57.158 + [[ Linux == \L\i\n\u\x ]] 00:01:57.158 + sudo dmesg -T 00:01:57.158 + sudo dmesg --clear 00:01:57.158 + dmesg_pid=5099 00:01:57.158 + sudo dmesg -Tw 00:01:57.158 + [[ Fedora Linux == FreeBSD ]] 00:01:57.158 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.158 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.158 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.158 + [[ -x /usr/src/fio-static/fio ]] 00:01:57.158 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.158 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.158 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.158 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.158 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.158 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.158 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.158 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.158 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.158 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.158 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:57.159 Test configuration: 00:01:57.159 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.159 SPDK_TEST_NVMF=1 00:01:57.159 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.159 SPDK_TEST_URING=1 00:01:57.159 SPDK_TEST_USDT=1 00:01:57.159 SPDK_RUN_UBSAN=1 00:01:57.159 NET_TYPE=virt 00:01:57.159 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.159 RUN_NIGHTLY=0 22:32:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:57.417 22:32:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.417 22:32:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.417 22:32:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.417 22:32:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.417 22:32:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.417 22:32:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.417 22:32:12 -- paths/export.sh@5 -- $ export PATH 00:01:57.417 22:32:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.417 22:32:12 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:57.417 22:32:12 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:57.417 22:32:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082732.XXXXXX 
00:01:57.417 22:32:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082732.GjFhb5 00:01:57.417 22:32:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:57.417 22:32:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:57.417 22:32:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:57.417 22:32:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:57.417 22:32:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.417 22:32:12 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:57.417 22:32:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:57.417 22:32:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.417 22:32:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:57.417 22:32:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:57.417 22:32:12 -- pm/common@17 -- $ local monitor 00:01:57.417 22:32:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.417 22:32:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.417 22:32:12 -- pm/common@25 -- $ sleep 1 00:01:57.417 22:32:12 -- pm/common@21 -- $ date +%s 00:01:57.417 22:32:12 -- pm/common@21 -- $ date +%s 00:01:57.417 22:32:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721082732 00:01:57.417 22:32:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721082732 00:01:57.417 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721082732_collect-vmstat.pm.log 00:01:57.417 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721082732_collect-cpu-load.pm.log 00:01:58.352 22:32:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:58.352 22:32:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.353 22:32:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.353 22:32:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:58.353 22:32:13 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.353 Mon Jul 15 10:32:13 PM UTC 2024 00:01:58.353 22:32:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.353 v24.09-pre-170-gd608564df 00:01:58.353 22:32:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:58.353 22:32:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:58.353 22:32:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:58.353 22:32:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:58.353 22:32:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:58.353 22:32:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.353 ************************************ 00:01:58.353 START TEST ubsan 00:01:58.353 ************************************ 00:01:58.353 using ubsan 00:01:58.353 22:32:13 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:01:58.353 00:01:58.353 real 0m0.000s 00:01:58.353 user 0m0.000s 00:01:58.353 sys 0m0.000s 00:01:58.353 22:32:13 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:58.353 22:32:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:58.353 ************************************ 00:01:58.353 END TEST ubsan 00:01:58.353 ************************************ 00:01:58.353 22:32:13 -- common/autotest_common.sh@1142 -- $ return 0 00:01:58.353 22:32:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:58.353 22:32:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:58.353 22:32:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:58.353 22:32:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:58.353 22:32:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:58.353 22:32:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:58.353 22:32:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:58.353 22:32:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:58.353 22:32:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:58.611 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:58.611 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:58.870 Using 'verbs' RDMA provider 00:02:14.734 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:26.944 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:26.944 Creating mk/config.mk...done. 00:02:26.944 Creating mk/cc.flags.mk...done. 00:02:26.944 Type 'make' to build. 00:02:26.944 22:32:41 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:26.944 22:32:41 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:26.944 22:32:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:26.944 22:32:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.944 ************************************ 00:02:26.944 START TEST make 00:02:26.944 ************************************ 00:02:26.944 22:32:41 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:26.944 make[1]: Nothing to be done for 'all'. 
00:02:39.159 The Meson build system 00:02:39.159 Version: 1.3.1 00:02:39.159 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:39.159 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:39.159 Build type: native build 00:02:39.159 Program cat found: YES (/usr/bin/cat) 00:02:39.159 Project name: DPDK 00:02:39.159 Project version: 24.03.0 00:02:39.159 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:39.159 C linker for the host machine: cc ld.bfd 2.39-16 00:02:39.159 Host machine cpu family: x86_64 00:02:39.159 Host machine cpu: x86_64 00:02:39.159 Message: ## Building in Developer Mode ## 00:02:39.159 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.159 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:39.159 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.159 Program python3 found: YES (/usr/bin/python3) 00:02:39.159 Program cat found: YES (/usr/bin/cat) 00:02:39.159 Compiler for C supports arguments -march=native: YES 00:02:39.159 Checking for size of "void *" : 8 00:02:39.159 Checking for size of "void *" : 8 (cached) 00:02:39.159 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:39.159 Library m found: YES 00:02:39.159 Library numa found: YES 00:02:39.159 Has header "numaif.h" : YES 00:02:39.159 Library fdt found: NO 00:02:39.159 Library execinfo found: NO 00:02:39.159 Has header "execinfo.h" : YES 00:02:39.159 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:39.159 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.159 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.159 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.159 Run-time dependency openssl found: YES 3.0.9 00:02:39.159 Run-time dependency libpcap found: YES 1.10.4 00:02:39.159 Has header "pcap.h" with dependency libpcap: YES 00:02:39.159 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.159 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.159 Compiler for C supports arguments -Wformat: YES 00:02:39.159 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.159 Compiler for C supports arguments -Wformat-security: NO 00:02:39.159 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.159 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.159 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.159 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.159 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.159 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.159 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.159 Compiler for C supports arguments -Wundef: YES 00:02:39.159 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.159 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.159 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.159 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:39.159 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.159 Program objdump found: YES (/usr/bin/objdump) 00:02:39.159 Compiler for C supports arguments -mavx512f: YES 00:02:39.159 Checking if "AVX512 checking" compiles: YES 00:02:39.159 Fetching value of define "__SSE4_2__" : 1 00:02:39.159 Fetching value of define 
"__AES__" : 1 00:02:39.159 Fetching value of define "__AVX__" : 1 00:02:39.159 Fetching value of define "__AVX2__" : 1 00:02:39.159 Fetching value of define "__AVX512BW__" : (undefined) 00:02:39.159 Fetching value of define "__AVX512CD__" : (undefined) 00:02:39.159 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:39.159 Fetching value of define "__AVX512F__" : (undefined) 00:02:39.159 Fetching value of define "__AVX512VL__" : (undefined) 00:02:39.159 Fetching value of define "__PCLMUL__" : 1 00:02:39.159 Fetching value of define "__RDRND__" : 1 00:02:39.159 Fetching value of define "__RDSEED__" : 1 00:02:39.159 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:39.159 Fetching value of define "__znver1__" : (undefined) 00:02:39.159 Fetching value of define "__znver2__" : (undefined) 00:02:39.159 Fetching value of define "__znver3__" : (undefined) 00:02:39.159 Fetching value of define "__znver4__" : (undefined) 00:02:39.159 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.159 Message: lib/log: Defining dependency "log" 00:02:39.159 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.159 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.159 Checking for function "getentropy" : NO 00:02:39.159 Message: lib/eal: Defining dependency "eal" 00:02:39.159 Message: lib/ring: Defining dependency "ring" 00:02:39.159 Message: lib/rcu: Defining dependency "rcu" 00:02:39.159 Message: lib/mempool: Defining dependency "mempool" 00:02:39.159 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.159 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.159 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.159 Compiler for C supports arguments -mpclmul: YES 00:02:39.160 Compiler for C supports arguments -maes: YES 00:02:39.160 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.160 Compiler for C supports arguments -mavx512bw: YES 00:02:39.160 Compiler for C supports arguments -mavx512dq: YES 00:02:39.160 Compiler for C supports arguments -mavx512vl: YES 00:02:39.160 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.160 Compiler for C supports arguments -mavx2: YES 00:02:39.160 Compiler for C supports arguments -mavx: YES 00:02:39.160 Message: lib/net: Defining dependency "net" 00:02:39.160 Message: lib/meter: Defining dependency "meter" 00:02:39.160 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.160 Message: lib/pci: Defining dependency "pci" 00:02:39.160 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.160 Message: lib/hash: Defining dependency "hash" 00:02:39.160 Message: lib/timer: Defining dependency "timer" 00:02:39.160 Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.160 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.160 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.160 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.160 Message: lib/power: Defining dependency "power" 00:02:39.160 Message: lib/reorder: Defining dependency "reorder" 00:02:39.160 Message: lib/security: Defining dependency "security" 00:02:39.160 Has header "linux/userfaultfd.h" : YES 00:02:39.160 Has header "linux/vduse.h" : YES 00:02:39.160 Message: lib/vhost: Defining dependency "vhost" 00:02:39.160 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.160 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.160 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.160 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.160 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:39.160 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:39.160 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:39.160 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:39.160 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:39.160 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:39.160 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.160 Configuring doxy-api-html.conf using configuration 00:02:39.160 Configuring doxy-api-man.conf using configuration 00:02:39.160 Program mandb found: YES (/usr/bin/mandb) 00:02:39.160 Program sphinx-build found: NO 00:02:39.160 Configuring rte_build_config.h using configuration 00:02:39.160 Message: 00:02:39.160 ================= 00:02:39.160 Applications Enabled 00:02:39.160 ================= 00:02:39.160 00:02:39.160 apps: 00:02:39.160 00:02:39.160 00:02:39.160 Message: 00:02:39.160 ================= 00:02:39.160 Libraries Enabled 00:02:39.160 ================= 00:02:39.160 00:02:39.160 libs: 00:02:39.160 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:39.160 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:39.160 cryptodev, dmadev, power, reorder, security, vhost, 00:02:39.160 00:02:39.160 Message: 00:02:39.160 =============== 00:02:39.160 Drivers Enabled 00:02:39.160 =============== 00:02:39.160 00:02:39.160 common: 00:02:39.160 00:02:39.160 bus: 00:02:39.160 pci, vdev, 00:02:39.160 mempool: 00:02:39.160 ring, 00:02:39.160 dma: 00:02:39.160 00:02:39.160 net: 00:02:39.160 00:02:39.160 crypto: 00:02:39.160 00:02:39.160 compress: 00:02:39.160 00:02:39.160 vdpa: 00:02:39.160 00:02:39.160 00:02:39.160 Message: 00:02:39.160 ================= 00:02:39.160 Content Skipped 00:02:39.160 ================= 00:02:39.160 00:02:39.160 apps: 00:02:39.160 dumpcap: explicitly disabled via build config 00:02:39.160 graph: explicitly disabled via build config 00:02:39.160 pdump: explicitly disabled via build config 00:02:39.160 proc-info: explicitly disabled via build config 00:02:39.160 test-acl: explicitly disabled via build config 00:02:39.160 test-bbdev: explicitly disabled via build config 00:02:39.160 test-cmdline: explicitly disabled via build config 00:02:39.160 test-compress-perf: explicitly disabled via build config 00:02:39.160 test-crypto-perf: explicitly disabled via build config 00:02:39.160 test-dma-perf: explicitly disabled via build config 00:02:39.160 test-eventdev: explicitly disabled via build config 00:02:39.160 test-fib: explicitly disabled via build config 00:02:39.160 test-flow-perf: explicitly disabled via build config 00:02:39.160 test-gpudev: explicitly disabled via build config 00:02:39.160 test-mldev: explicitly disabled via build config 00:02:39.160 test-pipeline: explicitly disabled via build config 00:02:39.160 test-pmd: explicitly disabled via build config 00:02:39.160 test-regex: explicitly disabled via build config 00:02:39.160 test-sad: explicitly disabled via build config 00:02:39.160 test-security-perf: explicitly disabled via build config 00:02:39.160 00:02:39.160 libs: 00:02:39.160 argparse: explicitly disabled via build config 00:02:39.160 metrics: explicitly disabled via build config 00:02:39.160 acl: explicitly disabled via build config 00:02:39.160 bbdev: explicitly disabled via build config 00:02:39.160 
bitratestats: explicitly disabled via build config 00:02:39.160 bpf: explicitly disabled via build config 00:02:39.160 cfgfile: explicitly disabled via build config 00:02:39.160 distributor: explicitly disabled via build config 00:02:39.160 efd: explicitly disabled via build config 00:02:39.160 eventdev: explicitly disabled via build config 00:02:39.160 dispatcher: explicitly disabled via build config 00:02:39.160 gpudev: explicitly disabled via build config 00:02:39.160 gro: explicitly disabled via build config 00:02:39.160 gso: explicitly disabled via build config 00:02:39.160 ip_frag: explicitly disabled via build config 00:02:39.160 jobstats: explicitly disabled via build config 00:02:39.160 latencystats: explicitly disabled via build config 00:02:39.160 lpm: explicitly disabled via build config 00:02:39.160 member: explicitly disabled via build config 00:02:39.160 pcapng: explicitly disabled via build config 00:02:39.160 rawdev: explicitly disabled via build config 00:02:39.160 regexdev: explicitly disabled via build config 00:02:39.160 mldev: explicitly disabled via build config 00:02:39.160 rib: explicitly disabled via build config 00:02:39.160 sched: explicitly disabled via build config 00:02:39.160 stack: explicitly disabled via build config 00:02:39.160 ipsec: explicitly disabled via build config 00:02:39.160 pdcp: explicitly disabled via build config 00:02:39.160 fib: explicitly disabled via build config 00:02:39.160 port: explicitly disabled via build config 00:02:39.160 pdump: explicitly disabled via build config 00:02:39.160 table: explicitly disabled via build config 00:02:39.160 pipeline: explicitly disabled via build config 00:02:39.160 graph: explicitly disabled via build config 00:02:39.160 node: explicitly disabled via build config 00:02:39.160 00:02:39.160 drivers: 00:02:39.160 common/cpt: not in enabled drivers build config 00:02:39.160 common/dpaax: not in enabled drivers build config 00:02:39.160 common/iavf: not in enabled drivers build config 00:02:39.160 common/idpf: not in enabled drivers build config 00:02:39.160 common/ionic: not in enabled drivers build config 00:02:39.160 common/mvep: not in enabled drivers build config 00:02:39.160 common/octeontx: not in enabled drivers build config 00:02:39.160 bus/auxiliary: not in enabled drivers build config 00:02:39.160 bus/cdx: not in enabled drivers build config 00:02:39.160 bus/dpaa: not in enabled drivers build config 00:02:39.160 bus/fslmc: not in enabled drivers build config 00:02:39.160 bus/ifpga: not in enabled drivers build config 00:02:39.160 bus/platform: not in enabled drivers build config 00:02:39.160 bus/uacce: not in enabled drivers build config 00:02:39.160 bus/vmbus: not in enabled drivers build config 00:02:39.160 common/cnxk: not in enabled drivers build config 00:02:39.160 common/mlx5: not in enabled drivers build config 00:02:39.160 common/nfp: not in enabled drivers build config 00:02:39.160 common/nitrox: not in enabled drivers build config 00:02:39.160 common/qat: not in enabled drivers build config 00:02:39.160 common/sfc_efx: not in enabled drivers build config 00:02:39.160 mempool/bucket: not in enabled drivers build config 00:02:39.160 mempool/cnxk: not in enabled drivers build config 00:02:39.160 mempool/dpaa: not in enabled drivers build config 00:02:39.160 mempool/dpaa2: not in enabled drivers build config 00:02:39.160 mempool/octeontx: not in enabled drivers build config 00:02:39.160 mempool/stack: not in enabled drivers build config 00:02:39.160 dma/cnxk: not in enabled drivers build 
config 00:02:39.160 dma/dpaa: not in enabled drivers build config 00:02:39.160 dma/dpaa2: not in enabled drivers build config 00:02:39.160 dma/hisilicon: not in enabled drivers build config 00:02:39.160 dma/idxd: not in enabled drivers build config 00:02:39.160 dma/ioat: not in enabled drivers build config 00:02:39.160 dma/skeleton: not in enabled drivers build config 00:02:39.160 net/af_packet: not in enabled drivers build config 00:02:39.160 net/af_xdp: not in enabled drivers build config 00:02:39.160 net/ark: not in enabled drivers build config 00:02:39.160 net/atlantic: not in enabled drivers build config 00:02:39.160 net/avp: not in enabled drivers build config 00:02:39.160 net/axgbe: not in enabled drivers build config 00:02:39.160 net/bnx2x: not in enabled drivers build config 00:02:39.160 net/bnxt: not in enabled drivers build config 00:02:39.160 net/bonding: not in enabled drivers build config 00:02:39.160 net/cnxk: not in enabled drivers build config 00:02:39.160 net/cpfl: not in enabled drivers build config 00:02:39.160 net/cxgbe: not in enabled drivers build config 00:02:39.160 net/dpaa: not in enabled drivers build config 00:02:39.160 net/dpaa2: not in enabled drivers build config 00:02:39.160 net/e1000: not in enabled drivers build config 00:02:39.160 net/ena: not in enabled drivers build config 00:02:39.160 net/enetc: not in enabled drivers build config 00:02:39.160 net/enetfec: not in enabled drivers build config 00:02:39.160 net/enic: not in enabled drivers build config 00:02:39.160 net/failsafe: not in enabled drivers build config 00:02:39.160 net/fm10k: not in enabled drivers build config 00:02:39.160 net/gve: not in enabled drivers build config 00:02:39.160 net/hinic: not in enabled drivers build config 00:02:39.160 net/hns3: not in enabled drivers build config 00:02:39.160 net/i40e: not in enabled drivers build config 00:02:39.160 net/iavf: not in enabled drivers build config 00:02:39.160 net/ice: not in enabled drivers build config 00:02:39.161 net/idpf: not in enabled drivers build config 00:02:39.161 net/igc: not in enabled drivers build config 00:02:39.161 net/ionic: not in enabled drivers build config 00:02:39.161 net/ipn3ke: not in enabled drivers build config 00:02:39.161 net/ixgbe: not in enabled drivers build config 00:02:39.161 net/mana: not in enabled drivers build config 00:02:39.161 net/memif: not in enabled drivers build config 00:02:39.161 net/mlx4: not in enabled drivers build config 00:02:39.161 net/mlx5: not in enabled drivers build config 00:02:39.161 net/mvneta: not in enabled drivers build config 00:02:39.161 net/mvpp2: not in enabled drivers build config 00:02:39.161 net/netvsc: not in enabled drivers build config 00:02:39.161 net/nfb: not in enabled drivers build config 00:02:39.161 net/nfp: not in enabled drivers build config 00:02:39.161 net/ngbe: not in enabled drivers build config 00:02:39.161 net/null: not in enabled drivers build config 00:02:39.161 net/octeontx: not in enabled drivers build config 00:02:39.161 net/octeon_ep: not in enabled drivers build config 00:02:39.161 net/pcap: not in enabled drivers build config 00:02:39.161 net/pfe: not in enabled drivers build config 00:02:39.161 net/qede: not in enabled drivers build config 00:02:39.161 net/ring: not in enabled drivers build config 00:02:39.161 net/sfc: not in enabled drivers build config 00:02:39.161 net/softnic: not in enabled drivers build config 00:02:39.161 net/tap: not in enabled drivers build config 00:02:39.161 net/thunderx: not in enabled drivers build config 00:02:39.161 
net/txgbe: not in enabled drivers build config 00:02:39.161 net/vdev_netvsc: not in enabled drivers build config 00:02:39.161 net/vhost: not in enabled drivers build config 00:02:39.161 net/virtio: not in enabled drivers build config 00:02:39.161 net/vmxnet3: not in enabled drivers build config 00:02:39.161 raw/*: missing internal dependency, "rawdev" 00:02:39.161 crypto/armv8: not in enabled drivers build config 00:02:39.161 crypto/bcmfs: not in enabled drivers build config 00:02:39.161 crypto/caam_jr: not in enabled drivers build config 00:02:39.161 crypto/ccp: not in enabled drivers build config 00:02:39.161 crypto/cnxk: not in enabled drivers build config 00:02:39.161 crypto/dpaa_sec: not in enabled drivers build config 00:02:39.161 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.161 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.161 crypto/mlx5: not in enabled drivers build config 00:02:39.161 crypto/mvsam: not in enabled drivers build config 00:02:39.161 crypto/nitrox: not in enabled drivers build config 00:02:39.161 crypto/null: not in enabled drivers build config 00:02:39.161 crypto/octeontx: not in enabled drivers build config 00:02:39.161 crypto/openssl: not in enabled drivers build config 00:02:39.161 crypto/scheduler: not in enabled drivers build config 00:02:39.161 crypto/uadk: not in enabled drivers build config 00:02:39.161 crypto/virtio: not in enabled drivers build config 00:02:39.161 compress/isal: not in enabled drivers build config 00:02:39.161 compress/mlx5: not in enabled drivers build config 00:02:39.161 compress/nitrox: not in enabled drivers build config 00:02:39.161 compress/octeontx: not in enabled drivers build config 00:02:39.161 compress/zlib: not in enabled drivers build config 00:02:39.161 regex/*: missing internal dependency, "regexdev" 00:02:39.161 ml/*: missing internal dependency, "mldev" 00:02:39.161 vdpa/ifc: not in enabled drivers build config 00:02:39.161 vdpa/mlx5: not in enabled drivers build config 00:02:39.161 vdpa/nfp: not in enabled drivers build config 00:02:39.161 vdpa/sfc: not in enabled drivers build config 00:02:39.161 event/*: missing internal dependency, "eventdev" 00:02:39.161 baseband/*: missing internal dependency, "bbdev" 00:02:39.161 gpu/*: missing internal dependency, "gpudev" 00:02:39.161 00:02:39.161 00:02:39.161 Build targets in project: 85 00:02:39.161 00:02:39.161 DPDK 24.03.0 00:02:39.161 00:02:39.161 User defined options 00:02:39.161 buildtype : debug 00:02:39.161 default_library : shared 00:02:39.161 libdir : lib 00:02:39.161 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:39.161 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:39.161 c_link_args : 00:02:39.161 cpu_instruction_set: native 00:02:39.161 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:39.161 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:39.161 enable_docs : false 00:02:39.161 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:39.161 enable_kmods : false 00:02:39.161 tests : false 00:02:39.161 00:02:39.161 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.161 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:39.161 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:39.161 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.161 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:39.161 [4/268] Linking static target lib/librte_kvargs.a 00:02:39.161 [5/268] Linking static target lib/librte_log.a 00:02:39.161 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.161 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.161 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.161 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.161 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.161 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.161 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.161 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.161 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.161 [15/268] Linking static target lib/librte_telemetry.a 00:02:39.161 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.161 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.421 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.421 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.421 [20/268] Linking target lib/librte_log.so.24.1 00:02:39.680 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:39.680 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:39.938 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.938 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.938 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:39.938 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.938 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.196 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.196 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.196 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.196 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:40.196 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.196 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.196 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.455 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.455 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:40.455 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.714 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.972 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.972 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.972 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.972 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.972 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.972 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.972 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.972 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.230 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.230 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.230 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.489 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.748 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.748 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.748 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.008 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.008 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.008 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.008 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.267 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.267 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.267 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.526 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.526 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.785 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.785 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.785 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.785 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.785 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.785 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.096 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:43.369 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:43.369 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.369 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.369 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:43.627 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.627 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.627 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.627 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.627 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:43.887 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.887 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.146 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.146 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.405 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.405 [84/268] Linking static target lib/librte_ring.a 00:02:44.405 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.405 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.405 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:44.405 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.405 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.405 [90/268] Linking static target lib/librte_rcu.a 00:02:44.405 [91/268] Linking static target lib/librte_eal.a 00:02:44.405 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.664 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.664 [94/268] Linking static target lib/librte_mempool.a 00:02:44.922 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.922 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.922 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.922 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:44.922 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:44.922 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.922 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:45.181 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:45.181 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:45.181 [104/268] Linking static target lib/librte_mbuf.a 00:02:45.440 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:45.440 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.699 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.699 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:45.699 [109/268] Linking static target lib/librte_net.a 00:02:45.699 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.699 [111/268] Linking static target lib/librte_meter.a 00:02:45.958 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.958 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.217 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.217 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.217 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.217 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.217 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:46.476 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.734 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:02:46.992 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:46.992 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.250 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.250 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.250 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.250 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.250 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.509 [128/268] Linking static target lib/librte_pci.a 00:02:47.509 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:47.509 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.509 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.509 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.767 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.767 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.767 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.767 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.767 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:47.767 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.767 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.767 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.767 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.767 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.767 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.767 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:47.767 [145/268] Linking static target lib/librte_ethdev.a 00:02:48.027 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:48.027 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.284 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:48.284 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.284 [150/268] Linking static target lib/librte_cmdline.a 00:02:48.284 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.542 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.542 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.542 [154/268] Linking static target lib/librte_timer.a 00:02:48.542 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.799 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:48.799 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:48.799 [158/268] Linking static target lib/librte_hash.a 00:02:49.058 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:49.058 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:49.058 [161/268] Linking static target lib/librte_compressdev.a 
00:02:49.315 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.315 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.315 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.315 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.572 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:49.829 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.829 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.829 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.829 [170/268] Linking static target lib/librte_dmadev.a 00:02:49.829 [171/268] Linking static target lib/librte_cryptodev.a 00:02:49.829 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.087 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.087 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.087 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:50.087 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.087 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.346 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:50.604 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:50.604 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:50.604 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:50.604 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:50.604 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.862 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.862 [185/268] Linking static target lib/librte_power.a 00:02:50.862 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.120 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.120 [188/268] Linking static target lib/librte_reorder.a 00:02:51.379 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.379 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.379 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.379 [192/268] Linking static target lib/librte_security.a 00:02:51.379 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.638 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.638 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.638 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.896 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.896 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.896 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.155 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.155 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.461 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.461 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.461 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.461 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.461 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.719 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.719 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.719 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.719 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.719 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.978 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.978 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.978 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.978 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.978 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.978 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.978 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.978 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:52.978 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.978 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.978 [222/268] Linking static target drivers/librte_bus_vdev.a 00:02:53.237 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.237 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.237 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.237 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:53.237 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.496 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.432 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.432 [230/268] Linking static target lib/librte_vhost.a 00:02:54.999 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.000 [232/268] Linking target lib/librte_eal.so.24.1 00:02:55.258 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:55.258 [234/268] Linking target lib/librte_timer.so.24.1 00:02:55.258 [235/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.258 [236/268] Linking target lib/librte_pci.so.24.1 00:02:55.258 [237/268] Linking target lib/librte_ring.so.24.1 00:02:55.258 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:55.258 [239/268] Linking target 
lib/librte_meter.so.24.1 00:02:55.258 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:55.516 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.516 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.516 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.516 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:55.516 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.516 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:55.516 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:55.516 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.516 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.516 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:55.516 [251/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.773 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:55.773 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:55.773 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.773 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:55.773 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:56.033 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:56.033 [258/268] Linking target lib/librte_net.so.24.1 00:02:56.033 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:56.033 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:56.033 [261/268] Linking target lib/librte_hash.so.24.1 00:02:56.033 [262/268] Linking target lib/librte_security.so.24.1 00:02:56.033 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:56.033 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:56.291 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.291 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.291 [267/268] Linking target lib/librte_power.so.24.1 00:02:56.291 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:56.291 INFO: autodetecting backend as ninja 00:02:56.291 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.708 CC lib/log/log.o 00:02:57.708 CC lib/log/log_flags.o 00:02:57.708 CC lib/log/log_deprecated.o 00:02:57.708 CC lib/ut/ut.o 00:02:57.708 CC lib/ut_mock/mock.o 00:02:57.708 LIB libspdk_log.a 00:02:57.708 LIB libspdk_ut.a 00:02:57.708 SO libspdk_log.so.7.0 00:02:57.708 LIB libspdk_ut_mock.a 00:02:57.966 SO libspdk_ut.so.2.0 00:02:57.966 SO libspdk_ut_mock.so.6.0 00:02:57.966 SYMLINK libspdk_log.so 00:02:57.966 SYMLINK libspdk_ut.so 00:02:57.966 SYMLINK libspdk_ut_mock.so 00:02:58.223 CC lib/util/base64.o 00:02:58.223 CC lib/util/cpuset.o 00:02:58.223 CXX lib/trace_parser/trace.o 00:02:58.223 CC lib/util/bit_array.o 00:02:58.223 CC lib/util/crc16.o 00:02:58.223 CC lib/util/crc32.o 00:02:58.223 CC lib/util/crc32c.o 00:02:58.223 CC lib/ioat/ioat.o 00:02:58.223 CC lib/dma/dma.o 00:02:58.223 CC lib/vfio_user/host/vfio_user_pci.o 00:02:58.223 CC lib/util/crc32_ieee.o 00:02:58.223 CC lib/vfio_user/host/vfio_user.o 00:02:58.223 CC lib/util/crc64.o 
00:02:58.223 CC lib/util/dif.o 00:02:58.481 CC lib/util/fd.o 00:02:58.481 LIB libspdk_dma.a 00:02:58.481 CC lib/util/file.o 00:02:58.481 SO libspdk_dma.so.4.0 00:02:58.481 CC lib/util/hexlify.o 00:02:58.481 CC lib/util/iov.o 00:02:58.481 LIB libspdk_ioat.a 00:02:58.481 CC lib/util/math.o 00:02:58.481 SYMLINK libspdk_dma.so 00:02:58.481 CC lib/util/pipe.o 00:02:58.481 SO libspdk_ioat.so.7.0 00:02:58.481 CC lib/util/strerror_tls.o 00:02:58.481 CC lib/util/string.o 00:02:58.481 SYMLINK libspdk_ioat.so 00:02:58.481 LIB libspdk_vfio_user.a 00:02:58.481 CC lib/util/uuid.o 00:02:58.738 CC lib/util/fd_group.o 00:02:58.738 SO libspdk_vfio_user.so.5.0 00:02:58.738 CC lib/util/xor.o 00:02:58.738 CC lib/util/zipf.o 00:02:58.738 SYMLINK libspdk_vfio_user.so 00:02:58.997 LIB libspdk_util.a 00:02:58.997 SO libspdk_util.so.9.1 00:02:59.256 LIB libspdk_trace_parser.a 00:02:59.256 SYMLINK libspdk_util.so 00:02:59.256 SO libspdk_trace_parser.so.5.0 00:02:59.256 SYMLINK libspdk_trace_parser.so 00:02:59.256 CC lib/conf/conf.o 00:02:59.256 CC lib/rdma_provider/common.o 00:02:59.256 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:59.514 CC lib/env_dpdk/env.o 00:02:59.514 CC lib/env_dpdk/memory.o 00:02:59.514 CC lib/vmd/vmd.o 00:02:59.514 CC lib/vmd/led.o 00:02:59.514 CC lib/idxd/idxd.o 00:02:59.514 CC lib/rdma_utils/rdma_utils.o 00:02:59.514 CC lib/json/json_parse.o 00:02:59.514 CC lib/json/json_util.o 00:02:59.514 CC lib/idxd/idxd_user.o 00:02:59.514 LIB libspdk_conf.a 00:02:59.793 CC lib/json/json_write.o 00:02:59.793 LIB libspdk_rdma_provider.a 00:02:59.793 LIB libspdk_rdma_utils.a 00:02:59.793 SO libspdk_conf.so.6.0 00:02:59.793 SO libspdk_rdma_utils.so.1.0 00:02:59.793 SO libspdk_rdma_provider.so.6.0 00:02:59.793 SYMLINK libspdk_conf.so 00:02:59.793 CC lib/env_dpdk/pci.o 00:02:59.793 SYMLINK libspdk_rdma_utils.so 00:02:59.793 CC lib/env_dpdk/init.o 00:02:59.793 SYMLINK libspdk_rdma_provider.so 00:02:59.793 CC lib/env_dpdk/threads.o 00:02:59.793 CC lib/env_dpdk/pci_ioat.o 00:02:59.793 CC lib/idxd/idxd_kernel.o 00:03:00.052 LIB libspdk_json.a 00:03:00.052 CC lib/env_dpdk/pci_virtio.o 00:03:00.052 CC lib/env_dpdk/pci_vmd.o 00:03:00.052 CC lib/env_dpdk/pci_idxd.o 00:03:00.052 SO libspdk_json.so.6.0 00:03:00.052 LIB libspdk_idxd.a 00:03:00.052 LIB libspdk_vmd.a 00:03:00.052 SO libspdk_vmd.so.6.0 00:03:00.052 SO libspdk_idxd.so.12.0 00:03:00.052 SYMLINK libspdk_json.so 00:03:00.052 CC lib/env_dpdk/pci_event.o 00:03:00.052 CC lib/env_dpdk/sigbus_handler.o 00:03:00.052 SYMLINK libspdk_vmd.so 00:03:00.052 CC lib/env_dpdk/pci_dpdk.o 00:03:00.052 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.052 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.052 SYMLINK libspdk_idxd.so 00:03:00.310 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.310 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.310 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.310 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.568 LIB libspdk_jsonrpc.a 00:03:00.568 SO libspdk_jsonrpc.so.6.0 00:03:00.568 SYMLINK libspdk_jsonrpc.so 00:03:00.826 LIB libspdk_env_dpdk.a 00:03:00.826 CC lib/rpc/rpc.o 00:03:00.826 SO libspdk_env_dpdk.so.14.1 00:03:01.091 SYMLINK libspdk_env_dpdk.so 00:03:01.091 LIB libspdk_rpc.a 00:03:01.091 SO libspdk_rpc.so.6.0 00:03:01.391 SYMLINK libspdk_rpc.so 00:03:01.391 CC lib/trace/trace.o 00:03:01.391 CC lib/trace/trace_flags.o 00:03:01.391 CC lib/trace/trace_rpc.o 00:03:01.391 CC lib/keyring/keyring.o 00:03:01.391 CC lib/keyring/keyring_rpc.o 00:03:01.391 CC lib/notify/notify.o 00:03:01.391 CC lib/notify/notify_rpc.o 00:03:01.650 LIB libspdk_notify.a 00:03:01.650 LIB 
libspdk_keyring.a 00:03:01.650 SO libspdk_notify.so.6.0 00:03:01.650 LIB libspdk_trace.a 00:03:01.650 SO libspdk_keyring.so.1.0 00:03:01.650 SYMLINK libspdk_notify.so 00:03:01.650 SO libspdk_trace.so.10.0 00:03:01.909 SYMLINK libspdk_keyring.so 00:03:01.909 SYMLINK libspdk_trace.so 00:03:02.168 CC lib/thread/thread.o 00:03:02.168 CC lib/thread/iobuf.o 00:03:02.168 CC lib/sock/sock.o 00:03:02.168 CC lib/sock/sock_rpc.o 00:03:02.736 LIB libspdk_sock.a 00:03:02.736 SO libspdk_sock.so.10.0 00:03:02.736 SYMLINK libspdk_sock.so 00:03:02.994 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.994 CC lib/nvme/nvme_ctrlr.o 00:03:02.994 CC lib/nvme/nvme_fabric.o 00:03:02.994 CC lib/nvme/nvme_ns_cmd.o 00:03:02.994 CC lib/nvme/nvme_ns.o 00:03:02.994 CC lib/nvme/nvme_pcie_common.o 00:03:02.994 CC lib/nvme/nvme_pcie.o 00:03:02.994 CC lib/nvme/nvme.o 00:03:02.994 CC lib/nvme/nvme_qpair.o 00:03:03.561 LIB libspdk_thread.a 00:03:03.561 SO libspdk_thread.so.10.1 00:03:03.819 SYMLINK libspdk_thread.so 00:03:03.819 CC lib/nvme/nvme_quirks.o 00:03:03.819 CC lib/nvme/nvme_transport.o 00:03:03.819 CC lib/nvme/nvme_discovery.o 00:03:03.819 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.819 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.819 CC lib/nvme/nvme_tcp.o 00:03:04.077 CC lib/nvme/nvme_opal.o 00:03:04.077 CC lib/accel/accel.o 00:03:04.077 CC lib/nvme/nvme_io_msg.o 00:03:04.334 CC lib/nvme/nvme_poll_group.o 00:03:04.334 CC lib/accel/accel_rpc.o 00:03:04.590 CC lib/nvme/nvme_zns.o 00:03:04.590 CC lib/nvme/nvme_stubs.o 00:03:04.590 CC lib/accel/accel_sw.o 00:03:04.590 CC lib/nvme/nvme_auth.o 00:03:04.590 CC lib/blob/blobstore.o 00:03:04.847 CC lib/blob/request.o 00:03:04.847 CC lib/nvme/nvme_cuse.o 00:03:05.103 LIB libspdk_accel.a 00:03:05.103 SO libspdk_accel.so.15.1 00:03:05.103 CC lib/nvme/nvme_rdma.o 00:03:05.103 CC lib/blob/zeroes.o 00:03:05.103 CC lib/blob/blob_bs_dev.o 00:03:05.104 CC lib/init/json_config.o 00:03:05.104 SYMLINK libspdk_accel.so 00:03:05.364 CC lib/init/subsystem.o 00:03:05.364 CC lib/virtio/virtio.o 00:03:05.364 CC lib/bdev/bdev.o 00:03:05.364 CC lib/virtio/virtio_vhost_user.o 00:03:05.364 CC lib/init/subsystem_rpc.o 00:03:05.364 CC lib/init/rpc.o 00:03:05.364 CC lib/virtio/virtio_vfio_user.o 00:03:05.668 CC lib/virtio/virtio_pci.o 00:03:05.668 CC lib/bdev/bdev_rpc.o 00:03:05.668 CC lib/bdev/bdev_zone.o 00:03:05.668 LIB libspdk_init.a 00:03:05.668 SO libspdk_init.so.5.0 00:03:05.668 CC lib/bdev/part.o 00:03:05.668 CC lib/bdev/scsi_nvme.o 00:03:05.668 SYMLINK libspdk_init.so 00:03:05.925 LIB libspdk_virtio.a 00:03:05.925 SO libspdk_virtio.so.7.0 00:03:05.925 CC lib/event/app.o 00:03:05.925 CC lib/event/reactor.o 00:03:05.925 CC lib/event/log_rpc.o 00:03:05.925 CC lib/event/app_rpc.o 00:03:05.925 CC lib/event/scheduler_static.o 00:03:05.925 SYMLINK libspdk_virtio.so 00:03:06.489 LIB libspdk_event.a 00:03:06.489 SO libspdk_event.so.14.0 00:03:06.489 LIB libspdk_nvme.a 00:03:06.489 SYMLINK libspdk_event.so 00:03:06.745 SO libspdk_nvme.so.13.1 00:03:07.002 SYMLINK libspdk_nvme.so 00:03:07.566 LIB libspdk_blob.a 00:03:07.566 SO libspdk_blob.so.11.0 00:03:07.823 SYMLINK libspdk_blob.so 00:03:08.080 CC lib/lvol/lvol.o 00:03:08.080 CC lib/blobfs/blobfs.o 00:03:08.080 CC lib/blobfs/tree.o 00:03:08.080 LIB libspdk_bdev.a 00:03:08.080 SO libspdk_bdev.so.15.1 00:03:08.337 SYMLINK libspdk_bdev.so 00:03:08.608 CC lib/ublk/ublk.o 00:03:08.609 CC lib/ublk/ublk_rpc.o 00:03:08.609 CC lib/scsi/dev.o 00:03:08.609 CC lib/scsi/lun.o 00:03:08.609 CC lib/ftl/ftl_core.o 00:03:08.609 CC lib/nbd/nbd.o 00:03:08.609 CC lib/ftl/ftl_init.o 
00:03:08.609 CC lib/nvmf/ctrlr.o 00:03:08.609 CC lib/nvmf/ctrlr_discovery.o 00:03:08.609 CC lib/nbd/nbd_rpc.o 00:03:08.866 CC lib/ftl/ftl_layout.o 00:03:08.866 CC lib/scsi/port.o 00:03:08.866 CC lib/scsi/scsi.o 00:03:08.866 LIB libspdk_nbd.a 00:03:08.866 CC lib/ftl/ftl_debug.o 00:03:08.866 SO libspdk_nbd.so.7.0 00:03:08.866 LIB libspdk_blobfs.a 00:03:09.125 SO libspdk_blobfs.so.10.0 00:03:09.125 SYMLINK libspdk_nbd.so 00:03:09.125 CC lib/scsi/scsi_bdev.o 00:03:09.125 CC lib/scsi/scsi_pr.o 00:03:09.125 CC lib/nvmf/ctrlr_bdev.o 00:03:09.125 LIB libspdk_lvol.a 00:03:09.125 CC lib/scsi/scsi_rpc.o 00:03:09.125 SYMLINK libspdk_blobfs.so 00:03:09.125 CC lib/scsi/task.o 00:03:09.125 SO libspdk_lvol.so.10.0 00:03:09.125 LIB libspdk_ublk.a 00:03:09.125 SO libspdk_ublk.so.3.0 00:03:09.125 SYMLINK libspdk_lvol.so 00:03:09.125 CC lib/nvmf/subsystem.o 00:03:09.125 CC lib/nvmf/nvmf.o 00:03:09.125 CC lib/ftl/ftl_io.o 00:03:09.125 SYMLINK libspdk_ublk.so 00:03:09.125 CC lib/nvmf/nvmf_rpc.o 00:03:09.125 CC lib/nvmf/transport.o 00:03:09.387 CC lib/nvmf/tcp.o 00:03:09.387 CC lib/nvmf/stubs.o 00:03:09.387 CC lib/ftl/ftl_sb.o 00:03:09.387 LIB libspdk_scsi.a 00:03:09.674 SO libspdk_scsi.so.9.0 00:03:09.674 CC lib/ftl/ftl_l2p.o 00:03:09.674 SYMLINK libspdk_scsi.so 00:03:09.674 CC lib/nvmf/mdns_server.o 00:03:09.674 CC lib/nvmf/rdma.o 00:03:09.937 CC lib/nvmf/auth.o 00:03:09.937 CC lib/ftl/ftl_l2p_flat.o 00:03:09.937 CC lib/iscsi/conn.o 00:03:09.937 CC lib/ftl/ftl_nv_cache.o 00:03:09.937 CC lib/ftl/ftl_band.o 00:03:10.194 CC lib/ftl/ftl_band_ops.o 00:03:10.194 CC lib/ftl/ftl_writer.o 00:03:10.194 CC lib/iscsi/init_grp.o 00:03:10.452 CC lib/ftl/ftl_rq.o 00:03:10.452 CC lib/ftl/ftl_reloc.o 00:03:10.452 CC lib/ftl/ftl_l2p_cache.o 00:03:10.452 CC lib/iscsi/iscsi.o 00:03:10.452 CC lib/ftl/ftl_p2l.o 00:03:10.710 CC lib/vhost/vhost.o 00:03:10.710 CC lib/vhost/vhost_rpc.o 00:03:10.710 CC lib/vhost/vhost_scsi.o 00:03:10.710 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.968 CC lib/vhost/vhost_blk.o 00:03:10.968 CC lib/vhost/rte_vhost_user.o 00:03:10.968 CC lib/iscsi/md5.o 00:03:10.968 CC lib/iscsi/param.o 00:03:10.968 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.227 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.227 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.227 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.227 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.486 CC lib/iscsi/portal_grp.o 00:03:11.486 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.486 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.487 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.745 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.745 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:11.745 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.745 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.745 CC lib/iscsi/tgt_node.o 00:03:11.745 LIB libspdk_nvmf.a 00:03:11.745 CC lib/ftl/utils/ftl_conf.o 00:03:11.745 CC lib/ftl/utils/ftl_md.o 00:03:11.745 SO libspdk_nvmf.so.18.1 00:03:12.002 CC lib/iscsi/iscsi_subsystem.o 00:03:12.002 CC lib/ftl/utils/ftl_mempool.o 00:03:12.002 CC lib/iscsi/iscsi_rpc.o 00:03:12.002 CC lib/iscsi/task.o 00:03:12.002 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.002 LIB libspdk_vhost.a 00:03:12.002 CC lib/ftl/utils/ftl_property.o 00:03:12.002 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.002 SYMLINK libspdk_nvmf.so 00:03:12.002 SO libspdk_vhost.so.8.0 00:03:12.002 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.259 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.259 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.259 SYMLINK libspdk_vhost.so 00:03:12.259 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.259 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.259 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:12.259 LIB libspdk_iscsi.a 00:03:12.259 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.259 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.259 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.522 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.522 SO libspdk_iscsi.so.8.0 00:03:12.522 CC lib/ftl/base/ftl_base_dev.o 00:03:12.522 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.522 CC lib/ftl/ftl_trace.o 00:03:12.522 SYMLINK libspdk_iscsi.so 00:03:12.780 LIB libspdk_ftl.a 00:03:13.038 SO libspdk_ftl.so.9.0 00:03:13.296 SYMLINK libspdk_ftl.so 00:03:13.863 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.863 CC module/accel/error/accel_error.o 00:03:13.863 CC module/keyring/linux/keyring.o 00:03:13.863 CC module/keyring/file/keyring.o 00:03:13.863 CC module/accel/ioat/accel_ioat.o 00:03:13.863 CC module/blob/bdev/blob_bdev.o 00:03:13.863 CC module/accel/dsa/accel_dsa.o 00:03:13.863 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.863 CC module/sock/posix/posix.o 00:03:13.863 CC module/sock/uring/uring.o 00:03:13.863 LIB libspdk_env_dpdk_rpc.a 00:03:13.863 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.863 CC module/keyring/linux/keyring_rpc.o 00:03:13.863 CC module/keyring/file/keyring_rpc.o 00:03:13.863 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.863 CC module/accel/error/accel_error_rpc.o 00:03:13.863 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.137 LIB libspdk_scheduler_dynamic.a 00:03:14.137 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.137 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.137 LIB libspdk_keyring_linux.a 00:03:14.137 LIB libspdk_blob_bdev.a 00:03:14.137 LIB libspdk_keyring_file.a 00:03:14.137 SO libspdk_blob_bdev.so.11.0 00:03:14.137 LIB libspdk_accel_ioat.a 00:03:14.137 SO libspdk_keyring_linux.so.1.0 00:03:14.137 LIB libspdk_accel_error.a 00:03:14.137 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.137 SO libspdk_keyring_file.so.1.0 00:03:14.137 SO libspdk_accel_error.so.2.0 00:03:14.137 SO libspdk_accel_ioat.so.6.0 00:03:14.137 CC module/accel/iaa/accel_iaa.o 00:03:14.137 SYMLINK libspdk_blob_bdev.so 00:03:14.137 SYMLINK libspdk_keyring_linux.so 00:03:14.137 LIB libspdk_accel_dsa.a 00:03:14.137 SYMLINK libspdk_keyring_file.so 00:03:14.137 SYMLINK libspdk_accel_error.so 00:03:14.137 SYMLINK libspdk_accel_ioat.so 00:03:14.403 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.403 SO libspdk_accel_dsa.so.5.0 00:03:14.404 SYMLINK libspdk_accel_dsa.so 00:03:14.404 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.404 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.404 LIB libspdk_accel_iaa.a 00:03:14.404 SO libspdk_accel_iaa.so.3.0 00:03:14.404 CC module/bdev/error/vbdev_error.o 00:03:14.404 CC module/bdev/delay/vbdev_delay.o 00:03:14.404 CC module/bdev/gpt/gpt.o 00:03:14.404 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.662 LIB libspdk_sock_uring.a 00:03:14.662 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.662 SYMLINK libspdk_accel_iaa.so 00:03:14.662 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.662 LIB libspdk_scheduler_gscheduler.a 00:03:14.662 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:14.662 LIB libspdk_sock_posix.a 00:03:14.662 SO libspdk_sock_uring.so.5.0 00:03:14.662 SO libspdk_scheduler_gscheduler.so.4.0 00:03:14.662 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.662 SO libspdk_sock_posix.so.6.0 00:03:14.662 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.662 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.662 SYMLINK libspdk_sock_uring.so 00:03:14.662 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.662 SYMLINK 
libspdk_sock_posix.so 00:03:14.662 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.662 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.920 LIB libspdk_bdev_error.a 00:03:14.920 SO libspdk_bdev_error.so.6.0 00:03:14.920 CC module/bdev/null/bdev_null.o 00:03:14.920 CC module/bdev/malloc/bdev_malloc.o 00:03:14.920 LIB libspdk_bdev_delay.a 00:03:14.920 LIB libspdk_blobfs_bdev.a 00:03:14.920 SO libspdk_bdev_delay.so.6.0 00:03:14.920 SO libspdk_blobfs_bdev.so.6.0 00:03:14.920 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.920 SYMLINK libspdk_bdev_error.so 00:03:14.920 CC module/bdev/nvme/bdev_nvme.o 00:03:14.920 CC module/bdev/raid/bdev_raid.o 00:03:14.920 LIB libspdk_bdev_gpt.a 00:03:14.920 SYMLINK libspdk_bdev_delay.so 00:03:14.920 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.920 SO libspdk_bdev_gpt.so.6.0 00:03:14.920 SYMLINK libspdk_blobfs_bdev.so 00:03:14.920 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.178 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.178 SYMLINK libspdk_bdev_gpt.so 00:03:15.178 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.179 CC module/bdev/split/vbdev_split.o 00:03:15.179 CC module/bdev/null/bdev_null_rpc.o 00:03:15.179 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.179 LIB libspdk_bdev_malloc.a 00:03:15.179 SO libspdk_bdev_malloc.so.6.0 00:03:15.179 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.179 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.437 SYMLINK libspdk_bdev_malloc.so 00:03:15.437 CC module/bdev/raid/raid0.o 00:03:15.437 LIB libspdk_bdev_null.a 00:03:15.437 CC module/bdev/raid/raid1.o 00:03:15.437 CC module/bdev/raid/concat.o 00:03:15.437 SO libspdk_bdev_null.so.6.0 00:03:15.437 LIB libspdk_bdev_split.a 00:03:15.437 SO libspdk_bdev_split.so.6.0 00:03:15.437 LIB libspdk_bdev_passthru.a 00:03:15.437 SYMLINK libspdk_bdev_null.so 00:03:15.437 LIB libspdk_bdev_lvol.a 00:03:15.437 CC module/bdev/nvme/nvme_rpc.o 00:03:15.437 SYMLINK libspdk_bdev_split.so 00:03:15.437 SO libspdk_bdev_passthru.so.6.0 00:03:15.437 SO libspdk_bdev_lvol.so.6.0 00:03:15.695 SYMLINK libspdk_bdev_passthru.so 00:03:15.695 SYMLINK libspdk_bdev_lvol.so 00:03:15.695 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.695 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.695 CC module/bdev/uring/bdev_uring.o 00:03:15.695 CC module/bdev/nvme/vbdev_opal.o 00:03:15.695 CC module/bdev/aio/bdev_aio.o 00:03:15.695 CC module/bdev/ftl/bdev_ftl.o 00:03:15.695 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.695 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.953 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.953 LIB libspdk_bdev_raid.a 00:03:15.953 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.953 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.953 SO libspdk_bdev_raid.so.6.0 00:03:15.953 LIB libspdk_bdev_ftl.a 00:03:16.229 CC module/bdev/uring/bdev_uring_rpc.o 00:03:16.229 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.229 SO libspdk_bdev_ftl.so.6.0 00:03:16.229 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.229 SYMLINK libspdk_bdev_raid.so 00:03:16.229 SYMLINK libspdk_bdev_ftl.so 00:03:16.229 LIB libspdk_bdev_iscsi.a 00:03:16.229 LIB libspdk_bdev_zone_block.a 00:03:16.229 SO libspdk_bdev_iscsi.so.6.0 00:03:16.229 SO libspdk_bdev_zone_block.so.6.0 00:03:16.229 LIB libspdk_bdev_aio.a 00:03:16.229 SYMLINK libspdk_bdev_zone_block.so 00:03:16.229 SYMLINK libspdk_bdev_iscsi.so 00:03:16.229 LIB libspdk_bdev_uring.a 00:03:16.229 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.229 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.229 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.229 SO 
libspdk_bdev_aio.so.6.0 00:03:16.229 SO libspdk_bdev_uring.so.6.0 00:03:16.487 SYMLINK libspdk_bdev_aio.so 00:03:16.487 SYMLINK libspdk_bdev_uring.so 00:03:16.746 LIB libspdk_bdev_virtio.a 00:03:16.746 SO libspdk_bdev_virtio.so.6.0 00:03:17.004 SYMLINK libspdk_bdev_virtio.so 00:03:17.263 LIB libspdk_bdev_nvme.a 00:03:17.263 SO libspdk_bdev_nvme.so.7.0 00:03:17.263 SYMLINK libspdk_bdev_nvme.so 00:03:17.830 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.830 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.830 CC module/event/subsystems/vmd/vmd.o 00:03:17.830 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.830 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.830 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.830 CC module/event/subsystems/sock/sock.o 00:03:17.830 CC module/event/subsystems/keyring/keyring.o 00:03:18.087 LIB libspdk_event_scheduler.a 00:03:18.087 LIB libspdk_event_vhost_blk.a 00:03:18.087 LIB libspdk_event_vmd.a 00:03:18.087 LIB libspdk_event_keyring.a 00:03:18.087 SO libspdk_event_scheduler.so.4.0 00:03:18.087 LIB libspdk_event_iobuf.a 00:03:18.087 SO libspdk_event_vhost_blk.so.3.0 00:03:18.087 LIB libspdk_event_sock.a 00:03:18.087 SO libspdk_event_vmd.so.6.0 00:03:18.087 SO libspdk_event_keyring.so.1.0 00:03:18.087 SO libspdk_event_iobuf.so.3.0 00:03:18.087 SYMLINK libspdk_event_scheduler.so 00:03:18.087 SO libspdk_event_sock.so.5.0 00:03:18.087 SYMLINK libspdk_event_keyring.so 00:03:18.087 SYMLINK libspdk_event_vhost_blk.so 00:03:18.087 SYMLINK libspdk_event_vmd.so 00:03:18.087 SYMLINK libspdk_event_iobuf.so 00:03:18.087 SYMLINK libspdk_event_sock.so 00:03:18.345 CC module/event/subsystems/accel/accel.o 00:03:18.630 LIB libspdk_event_accel.a 00:03:18.630 SO libspdk_event_accel.so.6.0 00:03:18.630 SYMLINK libspdk_event_accel.so 00:03:18.905 CC module/event/subsystems/bdev/bdev.o 00:03:19.163 LIB libspdk_event_bdev.a 00:03:19.163 SO libspdk_event_bdev.so.6.0 00:03:19.163 SYMLINK libspdk_event_bdev.so 00:03:19.422 CC module/event/subsystems/ublk/ublk.o 00:03:19.422 CC module/event/subsystems/nbd/nbd.o 00:03:19.422 CC module/event/subsystems/scsi/scsi.o 00:03:19.422 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:19.422 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.681 LIB libspdk_event_nbd.a 00:03:19.681 LIB libspdk_event_ublk.a 00:03:19.681 SO libspdk_event_nbd.so.6.0 00:03:19.681 LIB libspdk_event_scsi.a 00:03:19.681 SO libspdk_event_ublk.so.3.0 00:03:19.681 SO libspdk_event_scsi.so.6.0 00:03:19.681 SYMLINK libspdk_event_nbd.so 00:03:19.681 LIB libspdk_event_nvmf.a 00:03:19.681 SYMLINK libspdk_event_ublk.so 00:03:19.681 SO libspdk_event_nvmf.so.6.0 00:03:19.939 SYMLINK libspdk_event_scsi.so 00:03:19.939 SYMLINK libspdk_event_nvmf.so 00:03:20.197 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.197 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.197 LIB libspdk_event_vhost_scsi.a 00:03:20.197 LIB libspdk_event_iscsi.a 00:03:20.197 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.197 SO libspdk_event_iscsi.so.6.0 00:03:20.456 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.456 SYMLINK libspdk_event_iscsi.so 00:03:20.456 SO libspdk.so.6.0 00:03:20.456 SYMLINK libspdk.so 00:03:20.714 CXX app/trace/trace.o 00:03:20.714 CC app/trace_record/trace_record.o 00:03:20.714 CC app/spdk_nvme_perf/perf.o 00:03:20.714 CC app/spdk_lspci/spdk_lspci.o 00:03:20.973 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.973 CC app/nvmf_tgt/nvmf_main.o 00:03:20.973 CC examples/util/zipf/zipf.o 00:03:20.973 CC app/spdk_tgt/spdk_tgt.o 00:03:20.973 CC 
test/thread/poller_perf/poller_perf.o 00:03:20.973 CC test/dma/test_dma/test_dma.o 00:03:20.973 LINK spdk_lspci 00:03:20.973 LINK zipf 00:03:20.973 LINK nvmf_tgt 00:03:21.232 LINK iscsi_tgt 00:03:21.232 LINK poller_perf 00:03:21.232 LINK spdk_trace_record 00:03:21.232 LINK spdk_tgt 00:03:21.232 LINK spdk_trace 00:03:21.232 CC app/spdk_nvme_identify/identify.o 00:03:21.490 CC examples/ioat/perf/perf.o 00:03:21.490 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.490 CC examples/vmd/lsvmd/lsvmd.o 00:03:21.490 CC examples/idxd/perf/perf.o 00:03:21.490 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.490 LINK test_dma 00:03:21.490 CC examples/vmd/led/led.o 00:03:21.748 LINK lsvmd 00:03:21.748 LINK spdk_nvme_discover 00:03:21.748 LINK ioat_perf 00:03:21.748 CC examples/thread/thread/thread_ex.o 00:03:21.748 LINK led 00:03:21.748 LINK interrupt_tgt 00:03:21.748 LINK spdk_nvme_perf 00:03:21.748 LINK idxd_perf 00:03:22.006 CC examples/ioat/verify/verify.o 00:03:22.006 CC app/spdk_top/spdk_top.o 00:03:22.006 LINK thread 00:03:22.006 CC test/app/bdev_svc/bdev_svc.o 00:03:22.006 CC test/app/histogram_perf/histogram_perf.o 00:03:22.006 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.006 CC examples/sock/hello_world/hello_sock.o 00:03:22.006 CC app/vhost/vhost.o 00:03:22.006 CC test/app/jsoncat/jsoncat.o 00:03:22.006 LINK spdk_nvme_identify 00:03:22.006 LINK verify 00:03:22.264 LINK histogram_perf 00:03:22.264 LINK bdev_svc 00:03:22.264 LINK jsoncat 00:03:22.264 LINK vhost 00:03:22.264 CC app/spdk_dd/spdk_dd.o 00:03:22.264 LINK hello_sock 00:03:22.522 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.522 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.522 CC test/app/stub/stub.o 00:03:22.522 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.522 LINK nvme_fuzz 00:03:22.522 LINK stub 00:03:22.522 CC test/blobfs/mkfs/mkfs.o 00:03:22.780 CC examples/accel/perf/accel_perf.o 00:03:22.780 CC app/fio/nvme/fio_plugin.o 00:03:22.780 LINK spdk_dd 00:03:22.780 LINK spdk_top 00:03:22.780 CC examples/blob/hello_world/hello_blob.o 00:03:22.780 CC examples/nvme/hello_world/hello_world.o 00:03:22.780 LINK vhost_fuzz 00:03:22.780 LINK mkfs 00:03:23.112 CC examples/blob/cli/blobcli.o 00:03:23.112 TEST_HEADER include/spdk/accel.h 00:03:23.112 TEST_HEADER include/spdk/accel_module.h 00:03:23.112 TEST_HEADER include/spdk/assert.h 00:03:23.112 TEST_HEADER include/spdk/barrier.h 00:03:23.112 TEST_HEADER include/spdk/base64.h 00:03:23.112 TEST_HEADER include/spdk/bdev.h 00:03:23.112 TEST_HEADER include/spdk/bdev_module.h 00:03:23.112 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.112 TEST_HEADER include/spdk/bit_array.h 00:03:23.112 TEST_HEADER include/spdk/bit_pool.h 00:03:23.112 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.112 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.112 TEST_HEADER include/spdk/blobfs.h 00:03:23.112 TEST_HEADER include/spdk/blob.h 00:03:23.112 TEST_HEADER include/spdk/conf.h 00:03:23.112 TEST_HEADER include/spdk/config.h 00:03:23.112 TEST_HEADER include/spdk/cpuset.h 00:03:23.112 TEST_HEADER include/spdk/crc16.h 00:03:23.112 TEST_HEADER include/spdk/crc32.h 00:03:23.112 TEST_HEADER include/spdk/crc64.h 00:03:23.112 TEST_HEADER include/spdk/dif.h 00:03:23.112 LINK hello_world 00:03:23.112 TEST_HEADER include/spdk/dma.h 00:03:23.112 TEST_HEADER include/spdk/endian.h 00:03:23.112 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.112 TEST_HEADER include/spdk/env.h 00:03:23.112 TEST_HEADER include/spdk/event.h 00:03:23.112 CC examples/nvme/reconnect/reconnect.o 00:03:23.112 TEST_HEADER 
include/spdk/fd_group.h 00:03:23.112 LINK hello_blob 00:03:23.112 TEST_HEADER include/spdk/fd.h 00:03:23.112 TEST_HEADER include/spdk/file.h 00:03:23.112 TEST_HEADER include/spdk/ftl.h 00:03:23.112 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.112 TEST_HEADER include/spdk/hexlify.h 00:03:23.112 TEST_HEADER include/spdk/histogram_data.h 00:03:23.112 TEST_HEADER include/spdk/idxd.h 00:03:23.112 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.112 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.112 TEST_HEADER include/spdk/init.h 00:03:23.112 TEST_HEADER include/spdk/ioat.h 00:03:23.112 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.112 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.112 TEST_HEADER include/spdk/json.h 00:03:23.112 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.112 TEST_HEADER include/spdk/keyring.h 00:03:23.112 TEST_HEADER include/spdk/keyring_module.h 00:03:23.112 TEST_HEADER include/spdk/likely.h 00:03:23.112 TEST_HEADER include/spdk/log.h 00:03:23.112 TEST_HEADER include/spdk/lvol.h 00:03:23.112 LINK accel_perf 00:03:23.112 TEST_HEADER include/spdk/memory.h 00:03:23.112 TEST_HEADER include/spdk/mmio.h 00:03:23.112 TEST_HEADER include/spdk/nbd.h 00:03:23.112 TEST_HEADER include/spdk/notify.h 00:03:23.112 TEST_HEADER include/spdk/nvme.h 00:03:23.112 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.112 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.112 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.112 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.112 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.112 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.112 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.112 TEST_HEADER include/spdk/nvmf.h 00:03:23.112 CC examples/nvme/arbitration/arbitration.o 00:03:23.112 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.112 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.112 TEST_HEADER include/spdk/opal.h 00:03:23.112 TEST_HEADER include/spdk/opal_spec.h 00:03:23.112 TEST_HEADER include/spdk/pci_ids.h 00:03:23.112 TEST_HEADER include/spdk/pipe.h 00:03:23.112 TEST_HEADER include/spdk/queue.h 00:03:23.112 TEST_HEADER include/spdk/reduce.h 00:03:23.112 TEST_HEADER include/spdk/rpc.h 00:03:23.112 TEST_HEADER include/spdk/scheduler.h 00:03:23.112 TEST_HEADER include/spdk/scsi.h 00:03:23.112 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.112 TEST_HEADER include/spdk/sock.h 00:03:23.112 TEST_HEADER include/spdk/stdinc.h 00:03:23.112 TEST_HEADER include/spdk/string.h 00:03:23.112 TEST_HEADER include/spdk/thread.h 00:03:23.112 TEST_HEADER include/spdk/trace.h 00:03:23.112 TEST_HEADER include/spdk/trace_parser.h 00:03:23.374 TEST_HEADER include/spdk/tree.h 00:03:23.374 TEST_HEADER include/spdk/ublk.h 00:03:23.374 TEST_HEADER include/spdk/util.h 00:03:23.374 TEST_HEADER include/spdk/uuid.h 00:03:23.374 TEST_HEADER include/spdk/version.h 00:03:23.374 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.374 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.374 TEST_HEADER include/spdk/vhost.h 00:03:23.374 TEST_HEADER include/spdk/vmd.h 00:03:23.374 TEST_HEADER include/spdk/xor.h 00:03:23.374 TEST_HEADER include/spdk/zipf.h 00:03:23.374 CXX test/cpp_headers/accel.o 00:03:23.374 CXX test/cpp_headers/accel_module.o 00:03:23.374 LINK spdk_nvme 00:03:23.374 LINK blobcli 00:03:23.374 CC examples/nvme/hotplug/hotplug.o 00:03:23.374 LINK reconnect 00:03:23.374 CXX test/cpp_headers/assert.o 00:03:23.374 CC app/fio/bdev/fio_plugin.o 00:03:23.374 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.631 LINK arbitration 00:03:23.631 CXX test/cpp_headers/barrier.o 00:03:23.631 LINK 
nvme_manage 00:03:23.631 LINK hotplug 00:03:23.631 LINK cmb_copy 00:03:23.631 CC examples/nvme/abort/abort.o 00:03:23.889 CXX test/cpp_headers/base64.o 00:03:23.889 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.889 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.889 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.889 LINK spdk_bdev 00:03:23.889 CXX test/cpp_headers/bdev.o 00:03:24.146 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.146 LINK iscsi_fuzz 00:03:24.146 LINK pmr_persistence 00:03:24.146 CC test/event/event_perf/event_perf.o 00:03:24.146 LINK abort 00:03:24.146 LINK hello_bdev 00:03:24.146 CXX test/cpp_headers/bdev_module.o 00:03:24.146 CC test/lvol/esnap/esnap.o 00:03:24.146 CC test/event/reactor/reactor.o 00:03:24.146 LINK event_perf 00:03:24.404 CC test/event/reactor_perf/reactor_perf.o 00:03:24.404 LINK reactor 00:03:24.404 CXX test/cpp_headers/bdev_zone.o 00:03:24.404 CC test/event/app_repeat/app_repeat.o 00:03:24.404 LINK mem_callbacks 00:03:24.404 CC test/event/scheduler/scheduler.o 00:03:24.404 CC test/env/vtophys/vtophys.o 00:03:24.404 LINK reactor_perf 00:03:24.404 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.662 LINK app_repeat 00:03:24.662 CXX test/cpp_headers/bit_array.o 00:03:24.662 CC test/env/memory/memory_ut.o 00:03:24.662 LINK vtophys 00:03:24.662 CC test/env/pci/pci_ut.o 00:03:24.662 LINK env_dpdk_post_init 00:03:24.662 LINK scheduler 00:03:24.919 LINK bdevperf 00:03:24.919 CXX test/cpp_headers/bit_pool.o 00:03:24.919 CC test/nvme/aer/aer.o 00:03:24.919 CC test/nvme/reset/reset.o 00:03:24.919 CC test/nvme/sgl/sgl.o 00:03:24.919 CC test/nvme/e2edp/nvme_dp.o 00:03:24.919 CXX test/cpp_headers/blob_bdev.o 00:03:24.919 CC test/rpc_client/rpc_client_test.o 00:03:25.177 LINK pci_ut 00:03:25.177 LINK aer 00:03:25.177 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.177 LINK reset 00:03:25.177 LINK sgl 00:03:25.177 LINK rpc_client_test 00:03:25.177 CC examples/nvmf/nvmf/nvmf.o 00:03:25.177 LINK nvme_dp 00:03:25.435 CXX test/cpp_headers/blobfs.o 00:03:25.435 CXX test/cpp_headers/blob.o 00:03:25.435 CC test/nvme/overhead/overhead.o 00:03:25.435 CC test/nvme/err_injection/err_injection.o 00:03:25.435 CC test/nvme/startup/startup.o 00:03:25.435 CC test/nvme/reserve/reserve.o 00:03:25.435 CC test/nvme/simple_copy/simple_copy.o 00:03:25.693 CXX test/cpp_headers/conf.o 00:03:25.693 LINK nvmf 00:03:25.693 CC test/nvme/connect_stress/connect_stress.o 00:03:25.693 LINK err_injection 00:03:25.693 LINK startup 00:03:25.693 CXX test/cpp_headers/config.o 00:03:25.693 LINK overhead 00:03:25.693 LINK reserve 00:03:25.693 CXX test/cpp_headers/cpuset.o 00:03:25.693 LINK simple_copy 00:03:25.951 LINK memory_ut 00:03:25.951 LINK connect_stress 00:03:25.951 CXX test/cpp_headers/crc16.o 00:03:25.951 CC test/nvme/boot_partition/boot_partition.o 00:03:25.951 CC test/nvme/compliance/nvme_compliance.o 00:03:25.951 CC test/accel/dif/dif.o 00:03:25.951 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.951 CC test/nvme/fdp/fdp.o 00:03:25.951 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.208 CXX test/cpp_headers/crc32.o 00:03:26.208 CXX test/cpp_headers/crc64.o 00:03:26.208 CC test/nvme/cuse/cuse.o 00:03:26.208 LINK boot_partition 00:03:26.208 LINK fused_ordering 00:03:26.208 LINK doorbell_aers 00:03:26.208 CXX test/cpp_headers/dif.o 00:03:26.208 CXX test/cpp_headers/dma.o 00:03:26.466 LINK nvme_compliance 00:03:26.466 CXX test/cpp_headers/endian.o 00:03:26.466 LINK fdp 00:03:26.466 CXX test/cpp_headers/env_dpdk.o 00:03:26.466 CXX test/cpp_headers/env.o 
00:03:26.466 CXX test/cpp_headers/event.o 00:03:26.466 LINK dif 00:03:26.466 CXX test/cpp_headers/fd_group.o 00:03:26.466 CXX test/cpp_headers/fd.o 00:03:26.466 CXX test/cpp_headers/file.o 00:03:26.466 CXX test/cpp_headers/ftl.o 00:03:26.724 CXX test/cpp_headers/gpt_spec.o 00:03:26.724 CXX test/cpp_headers/hexlify.o 00:03:26.724 CXX test/cpp_headers/histogram_data.o 00:03:26.724 CXX test/cpp_headers/idxd.o 00:03:26.724 CXX test/cpp_headers/idxd_spec.o 00:03:26.724 CXX test/cpp_headers/init.o 00:03:26.724 CXX test/cpp_headers/ioat.o 00:03:26.724 CXX test/cpp_headers/ioat_spec.o 00:03:26.724 CXX test/cpp_headers/iscsi_spec.o 00:03:26.724 CXX test/cpp_headers/json.o 00:03:26.982 CXX test/cpp_headers/jsonrpc.o 00:03:26.982 CXX test/cpp_headers/keyring.o 00:03:26.982 CXX test/cpp_headers/keyring_module.o 00:03:26.982 CXX test/cpp_headers/likely.o 00:03:26.982 CXX test/cpp_headers/log.o 00:03:26.982 CXX test/cpp_headers/lvol.o 00:03:26.982 CXX test/cpp_headers/memory.o 00:03:26.982 CC test/bdev/bdevio/bdevio.o 00:03:26.982 CXX test/cpp_headers/mmio.o 00:03:26.982 CXX test/cpp_headers/nbd.o 00:03:26.982 CXX test/cpp_headers/notify.o 00:03:26.982 CXX test/cpp_headers/nvme.o 00:03:26.982 CXX test/cpp_headers/nvme_intel.o 00:03:27.241 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.241 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:27.241 CXX test/cpp_headers/nvme_spec.o 00:03:27.241 CXX test/cpp_headers/nvme_zns.o 00:03:27.241 CXX test/cpp_headers/nvmf_cmd.o 00:03:27.241 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:27.241 CXX test/cpp_headers/nvmf.o 00:03:27.241 CXX test/cpp_headers/nvmf_spec.o 00:03:27.241 CXX test/cpp_headers/nvmf_transport.o 00:03:27.498 CXX test/cpp_headers/opal.o 00:03:27.498 CXX test/cpp_headers/opal_spec.o 00:03:27.498 LINK bdevio 00:03:27.498 CXX test/cpp_headers/pci_ids.o 00:03:27.498 CXX test/cpp_headers/pipe.o 00:03:27.498 CXX test/cpp_headers/queue.o 00:03:27.498 CXX test/cpp_headers/reduce.o 00:03:27.498 CXX test/cpp_headers/rpc.o 00:03:27.498 CXX test/cpp_headers/scheduler.o 00:03:27.498 LINK cuse 00:03:27.498 CXX test/cpp_headers/scsi.o 00:03:27.756 CXX test/cpp_headers/scsi_spec.o 00:03:27.756 CXX test/cpp_headers/sock.o 00:03:27.756 CXX test/cpp_headers/stdinc.o 00:03:27.756 CXX test/cpp_headers/string.o 00:03:27.756 CXX test/cpp_headers/thread.o 00:03:27.756 CXX test/cpp_headers/trace.o 00:03:27.756 CXX test/cpp_headers/trace_parser.o 00:03:27.756 CXX test/cpp_headers/tree.o 00:03:27.756 CXX test/cpp_headers/ublk.o 00:03:27.756 CXX test/cpp_headers/util.o 00:03:27.756 CXX test/cpp_headers/uuid.o 00:03:27.756 CXX test/cpp_headers/version.o 00:03:27.756 CXX test/cpp_headers/vfio_user_pci.o 00:03:27.756 CXX test/cpp_headers/vfio_user_spec.o 00:03:27.756 CXX test/cpp_headers/vhost.o 00:03:27.756 CXX test/cpp_headers/vmd.o 00:03:28.013 CXX test/cpp_headers/xor.o 00:03:28.013 CXX test/cpp_headers/zipf.o 00:03:29.429 LINK esnap 00:03:29.999 00:03:29.999 real 1m4.193s 00:03:29.999 user 6m31.375s 00:03:29.999 sys 1m35.369s 00:03:29.999 22:33:45 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:29.999 22:33:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.999 ************************************ 00:03:29.999 END TEST make 00:03:29.999 ************************************ 00:03:29.999 22:33:45 -- common/autotest_common.sh@1142 -- $ return 0 00:03:29.999 22:33:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:29.999 22:33:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:29.999 22:33:45 -- pm/common@40 -- $ local monitor pid pids 
signal=TERM 00:03:29.999 22:33:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.999 22:33:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:29.999 22:33:45 -- pm/common@44 -- $ pid=5134 00:03:29.999 22:33:45 -- pm/common@50 -- $ kill -TERM 5134 00:03:29.999 22:33:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.999 22:33:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:29.999 22:33:45 -- pm/common@44 -- $ pid=5136 00:03:29.999 22:33:45 -- pm/common@50 -- $ kill -TERM 5136 00:03:29.999 22:33:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:29.999 22:33:45 -- nvmf/common.sh@7 -- # uname -s 00:03:29.999 22:33:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:29.999 22:33:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:29.999 22:33:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:29.999 22:33:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:29.999 22:33:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:29.999 22:33:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:29.999 22:33:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:29.999 22:33:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:29.999 22:33:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:29.999 22:33:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:29.999 22:33:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:03:29.999 22:33:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:03:29.999 22:33:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:29.999 22:33:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:29.999 22:33:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:29.999 22:33:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:29.999 22:33:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:29.999 22:33:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:29.999 22:33:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:29.999 22:33:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:29.999 22:33:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.999 22:33:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.999 22:33:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.999 22:33:45 -- paths/export.sh@5 -- # export PATH 00:03:29.999 22:33:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.999 22:33:45 -- nvmf/common.sh@47 -- # : 0 00:03:29.999 22:33:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:29.999 22:33:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:29.999 22:33:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:29.999 22:33:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:29.999 22:33:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:29.999 22:33:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:29.999 22:33:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:29.999 22:33:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:29.999 22:33:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:29.999 22:33:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:29.999 22:33:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:29.999 22:33:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:29.999 22:33:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.999 22:33:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:29.999 22:33:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.999 22:33:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.999 22:33:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.257 22:33:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.257 22:33:45 -- spdk/autotest.sh@48 -- # udevadm_pid=52756 00:03:30.257 22:33:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.257 22:33:45 -- pm/common@17 -- # local monitor 00:03:30.257 22:33:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.257 22:33:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.257 22:33:45 -- pm/common@25 -- # sleep 1 00:03:30.257 22:33:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.257 22:33:45 -- pm/common@21 -- # date +%s 00:03:30.257 22:33:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721082825 00:03:30.257 22:33:45 -- pm/common@21 -- # date +%s 00:03:30.257 22:33:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721082825 00:03:30.258 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721082825_collect-vmstat.pm.log 00:03:30.258 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721082825_collect-cpu-load.pm.log 00:03:31.196 22:33:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.196 22:33:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.196 22:33:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:31.196 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:03:31.196 22:33:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.196 22:33:46 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:31.196 22:33:46 -- common/autotest_common.sh@10 -- # set +x 00:03:31.196 22:33:46 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:31.196 22:33:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:31.196 22:33:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:31.196 22:33:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:31.196 22:33:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:31.196 22:33:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.196 22:33:46 -- common/autotest_common.sh@1455 -- # uname 00:03:31.196 22:33:46 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:31.196 22:33:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.196 22:33:46 -- common/autotest_common.sh@1475 -- # uname 00:03:31.196 22:33:46 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:31.196 22:33:46 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:31.196 22:33:46 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:31.196 22:33:46 -- spdk/autotest.sh@72 -- # hash lcov 00:03:31.196 22:33:46 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:31.196 22:33:46 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:31.196 --rc lcov_branch_coverage=1 00:03:31.196 --rc lcov_function_coverage=1 00:03:31.196 --rc genhtml_branch_coverage=1 00:03:31.196 --rc genhtml_function_coverage=1 00:03:31.196 --rc genhtml_legend=1 00:03:31.196 --rc geninfo_all_blocks=1 00:03:31.196 ' 00:03:31.196 22:33:46 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:31.196 --rc lcov_branch_coverage=1 00:03:31.196 --rc lcov_function_coverage=1 00:03:31.196 --rc genhtml_branch_coverage=1 00:03:31.196 --rc genhtml_function_coverage=1 00:03:31.196 --rc genhtml_legend=1 00:03:31.196 --rc geninfo_all_blocks=1 00:03:31.196 ' 00:03:31.196 22:33:46 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:31.196 --rc lcov_branch_coverage=1 00:03:31.196 --rc lcov_function_coverage=1 00:03:31.196 --rc genhtml_branch_coverage=1 00:03:31.196 --rc genhtml_function_coverage=1 00:03:31.196 --rc genhtml_legend=1 00:03:31.196 --rc geninfo_all_blocks=1 00:03:31.196 --no-external' 00:03:31.196 22:33:46 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:31.196 --rc lcov_branch_coverage=1 00:03:31.196 --rc lcov_function_coverage=1 00:03:31.196 --rc genhtml_branch_coverage=1 00:03:31.196 --rc genhtml_function_coverage=1 00:03:31.196 --rc genhtml_legend=1 00:03:31.196 --rc geninfo_all_blocks=1 00:03:31.196 --no-external' 00:03:31.196 22:33:46 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:31.196 lcov: LCOV version 1.14 00:03:31.196 22:33:46 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:46.071 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.071 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:58.304 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:58.304 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:58.304 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:58.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:58.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:58.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:58.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:58.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:02.744 22:34:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:02.744 22:34:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:02.744 22:34:17 -- common/autotest_common.sh@10 -- # set +x 00:04:02.744 22:34:17 -- spdk/autotest.sh@91 -- # rm -f 00:04:02.744 22:34:17 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.744 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:03.002 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:03.002 22:34:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:03.002 22:34:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:03.002 22:34:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:03.002 22:34:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:03.002 22:34:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.002 22:34:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:03.002 22:34:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:03.002 22:34:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.002 22:34:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.002 22:34:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.002 22:34:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:03.002 22:34:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:03.002 22:34:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:03.002 22:34:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.002 22:34:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.002 22:34:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:03.002 22:34:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:03.002 22:34:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:03.002 22:34:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.002 22:34:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.002 22:34:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:03.002 22:34:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:03.003 22:34:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:03.003 22:34:18 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.003 22:34:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:03.003 22:34:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.003 22:34:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:03.003 22:34:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:03.003 22:34:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:03.003 22:34:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.003 No valid GPT data, bailing 00:04:03.003 22:34:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.003 22:34:18 -- scripts/common.sh@391 -- # pt= 00:04:03.003 22:34:18 -- scripts/common.sh@392 -- # return 1 00:04:03.003 22:34:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.003 1+0 records in 00:04:03.003 1+0 records out 00:04:03.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386258 s, 271 MB/s 00:04:03.003 22:34:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.003 22:34:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:03.003 22:34:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:03.003 22:34:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:03.003 22:34:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:03.003 No valid GPT data, bailing 00:04:03.003 22:34:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:03.003 22:34:18 -- scripts/common.sh@391 -- # pt= 00:04:03.003 22:34:18 -- scripts/common.sh@392 -- # return 1 00:04:03.003 22:34:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:03.003 1+0 records in 00:04:03.003 1+0 records out 00:04:03.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514194 s, 204 MB/s 00:04:03.003 22:34:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.003 22:34:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:03.003 22:34:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:03.003 22:34:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:03.003 22:34:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:03.003 No valid GPT data, bailing 00:04:03.003 22:34:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:03.260 22:34:18 -- scripts/common.sh@391 -- # pt= 00:04:03.260 22:34:18 -- scripts/common.sh@392 -- # return 1 00:04:03.260 22:34:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:03.260 1+0 records in 00:04:03.260 1+0 records out 00:04:03.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775518 s, 135 MB/s 00:04:03.260 22:34:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.260 22:34:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:03.260 22:34:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:03.260 22:34:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:03.260 22:34:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:03.260 No valid GPT data, bailing 00:04:03.260 22:34:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:03.260 22:34:18 -- scripts/common.sh@391 -- # pt= 00:04:03.260 22:34:18 -- scripts/common.sh@392 -- # return 1 00:04:03.260 22:34:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:04:03.260 1+0 records in 00:04:03.260 1+0 records out 00:04:03.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507428 s, 207 MB/s 00:04:03.260 22:34:18 -- spdk/autotest.sh@118 -- # sync 00:04:03.260 22:34:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.260 22:34:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.260 22:34:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:05.161 22:34:20 -- spdk/autotest.sh@124 -- # uname -s 00:04:05.161 22:34:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:05.161 22:34:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:05.161 22:34:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.161 22:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.161 22:34:20 -- common/autotest_common.sh@10 -- # set +x 00:04:05.161 ************************************ 00:04:05.161 START TEST setup.sh 00:04:05.161 ************************************ 00:04:05.161 22:34:20 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:05.161 * Looking for test storage... 00:04:05.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.161 22:34:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:05.161 22:34:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:05.161 22:34:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:05.161 22:34:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.161 22:34:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.161 22:34:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.161 ************************************ 00:04:05.161 START TEST acl 00:04:05.161 ************************************ 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:05.161 * Looking for test storage... 
00:04:05.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.161 22:34:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:05.161 22:34:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.161 22:34:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:05.161 22:34:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:05.161 22:34:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:05.161 22:34:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:05.161 22:34:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:05.161 22:34:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.161 22:34:20 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.094 22:34:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:06.094 22:34:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:06.094 22:34:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.094 22:34:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:06.094 22:34:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.094 22:34:21 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:06.662 22:34:22 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 Hugepages 00:04:06.662 node hugesize free / total 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 00:04:06.662 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.662 22:34:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.920 22:34:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:06.921 22:34:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:06.921 22:34:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.921 22:34:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.921 22:34:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.921 ************************************ 00:04:06.921 START TEST denied 00:04:06.921 ************************************ 00:04:06.921 22:34:22 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:06.921 22:34:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:06.921 22:34:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:06.921 22:34:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:06.921 22:34:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.921 22:34:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.910 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.910 22:34:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.168 ************************************ 00:04:08.168 END TEST denied 00:04:08.168 ************************************ 00:04:08.168 00:04:08.168 real 0m1.430s 00:04:08.168 user 0m0.526s 00:04:08.168 sys 0m0.826s 00:04:08.168 22:34:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.168 22:34:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:08.427 22:34:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:08.427 22:34:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:08.427 22:34:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.427 22:34:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.427 22:34:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.427 ************************************ 00:04:08.427 START TEST allowed 00:04:08.427 ************************************ 00:04:08.427 22:34:23 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:08.427 22:34:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:08.427 22:34:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:08.427 22:34:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.427 22:34:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:08.427 22:34:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.992 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.992 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:08.992 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:08.992 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:08.992 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:08.992 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:09.251 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:09.251 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.251 22:34:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:09.251 22:34:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.251 22:34:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.818 00:04:09.818 real 0m1.490s 00:04:09.818 user 0m0.642s 00:04:09.818 sys 0m0.833s 00:04:09.818 22:34:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:09.818 22:34:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:09.818 ************************************ 00:04:09.818 END TEST allowed 00:04:09.818 ************************************ 00:04:09.818 22:34:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:09.818 00:04:09.818 real 0m4.686s 00:04:09.818 user 0m1.969s 00:04:09.818 sys 0m2.625s 00:04:09.818 22:34:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.819 22:34:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:09.819 ************************************ 00:04:09.819 END TEST acl 00:04:09.819 ************************************ 00:04:09.819 22:34:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:09.819 22:34:25 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:09.819 22:34:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.819 22:34:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.819 22:34:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.819 ************************************ 00:04:09.819 START TEST hugepages 00:04:09.819 ************************************ 00:04:09.819 22:34:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:10.080 * Looking for test storage... 00:04:10.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6021444 kB' 'MemAvailable: 7401344 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 437128 kB' 'Inactive: 1265264 kB' 'Active(anon): 116268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265264 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 107524 kB' 'Mapped: 48936 kB' 'Shmem: 10488 kB' 'KReclaimable: 61432 kB' 'Slab: 132880 kB' 'SReclaimable: 61432 kB' 'SUnreclaim: 71448 kB' 'KernelStack: 6428 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 340272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.080 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.080 22:34:25 setup.sh.hugepages -- 
setup/common.sh@31-32 -- # (xtrace condensed: the IFS=': ' read loop walks the remaining /proc/meminfo keys from Active(anon) through HugePages_Surp; none matches Hugepagesize, so each iteration continues)
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
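The trace above is setup/common.sh's get_meminfo resolving Hugepagesize to 2048 kB by scanning /proc/meminfo one key at a time. A minimal standalone sketch of the same lookup pattern follows; the helper name meminfo_value is invented for illustration and this is not the repo's actual setup/common.sh code.
meminfo_value() {
    # Print the value column of one /proc/meminfo key, e.g. "Hugepagesize" -> 2048.
    local key=$1 var val rest
    while IFS=': ' read -r var val rest; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
default_hugepages=$(meminfo_value Hugepagesize)   # 2048 on this runner, per the trace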
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.081 22:34:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:10.082 22:34:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.082 22:34:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.082 22:34:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.082 ************************************ 00:04:10.082 START TEST default_setup 00:04:10.082 ************************************ 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.082 22:34:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.650 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.918 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8043140 kB' 'MemAvailable: 9422856 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453820 kB' 'Inactive: 1265272 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132504 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71460 kB' 'KernelStack: 6272 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.918 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # (xtrace condensed: the loop continues past MemTotal and then skips every key from MemFree through HardwareCorrupted; none matches AnonHugePages)
00:04:10.919 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.919 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:10.919 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:10.919 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:10.919 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
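verify_nr_hugepages reads its counters (AnonHugePages above, HugePages_Surp and HugePages_Rsvd below) with one get_meminfo call each. As an illustrative aside, and not what the test script itself does, the same numbers could be collected in a single awk pass; the variable names here are invented.
read -r hp_total hp_free hp_rsvd hp_surp anon_huge < <(awk '
    /^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
    /^HugePages_Rsvd:/  {r=$2} /^HugePages_Surp:/ {s=$2}
    /^AnonHugePages:/   {a=$2}
    END {print t, f, r, s, a}' /proc/meminfo)
echo "HugePages_Total=$hp_total Free=$hp_free Rsvd=$hp_rsvd Surp=$hp_surp AnonHugePages=${anon_huge}kB"
On the meminfo snapshots captured in this log that would print 1024, 1024, 0, 0 and 0.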
# mem=("${mem[@]#Node +([0-9]) }") 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8043016 kB' 'MemAvailable: 9422732 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453524 kB' 'Inactive: 1265272 kB' 'Active(anon): 132664 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123600 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132524 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71480 kB' 'KernelStack: 6256 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.920 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.921 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8043016 kB' 'MemAvailable: 9422732 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453600 kB' 'Inactive: 1265272 kB' 'Active(anon): 132740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123644 kB' 'Mapped: 
48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132524 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71480 kB' 'KernelStack: 6256 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:10.922 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ (the same "[[ field == HugePages_Rsvd ]] / continue" xtrace repeats here for every remaining /proc/meminfo field, Inactive through HugePages_Free in dump order, none of which matches) 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:10.924 nr_hugepages=1024 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.924 resv_hugepages=0 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.924 surplus_hugepages=0 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.924 anon_hugepages=0 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8043016 kB' 'MemAvailable: 9422732 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453312 kB' 'Inactive: 1265272 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132524 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71480 kB' 'KernelStack: 6240 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.924 22:34:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue (the same "[[ field == HugePages_Total ]] / continue" xtrace repeats here for every remaining field, Active(anon) through ShmemHugePages in dump order, none of which matches) 00:04:10.925 22:34:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.925 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8043016 kB' 'MemUsed: 4198960 kB' 'SwapCached: 0 kB' 'Active: 453512 kB' 'Inactive: 1265272 kB' 'Active(anon): 132652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1596608 kB' 'Mapped: 48616 kB' 'AnonPages: 123608 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61044 kB' 'Slab: 132524 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue (the same "[[ field == HugePages_Surp ]] / continue" xtrace repeats here for every remaining node0 meminfo field, SwapCached through SUnreclaim in dump order, none of which matches) 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.926 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
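
For reference, the get_meminfo trace that just echoed 0 for HugePages_Surp above boils down to a skip-until-match read loop over /proc/meminfo (or a node's meminfo file). The sketch below only reproduces that pattern as an illustration; the helper name get_meminfo_value is hypothetical and this is not the SPDK test/setup/common.sh code itself.

#!/usr/bin/env bash
# Minimal sketch (assumption: hypothetical helper, not SPDK's get_meminfo).
get_meminfo_value() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _

    # With a node argument, read that node's counters instead of the global ones.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <n> "; drop it so the rest
        # parses like a /proc/meminfo line ("Key:   value kB").
        if [[ $line == Node\ * ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # same skip-until-match seen in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# e.g. get_meminfo_value HugePages_Surp     -> 0 in the run above
#      get_meminfo_value HugePages_Total 0  -> node0's hugepage total
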
00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.927 node0=1024 expecting 1024 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.927 00:04:10.927 real 0m0.971s 00:04:10.927 user 0m0.460s 00:04:10.927 sys 0m0.467s 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.927 22:34:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:10.927 ************************************ 00:04:10.927 END TEST default_setup 00:04:10.927 ************************************ 00:04:11.186 22:34:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.186 22:34:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:11.186 22:34:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.186 22:34:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.186 22:34:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.186 ************************************ 00:04:11.186 START TEST per_node_1G_alloc 00:04:11.186 ************************************ 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.186 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.186 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.451 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.451 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9101592 kB' 'MemAvailable: 10481320 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 454164 kB' 'Inactive: 1265284 kB' 'Active(anon): 133304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 124500 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132504 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71460 kB' 'KernelStack: 6276 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.451 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9101092 kB' 'MemAvailable: 10480820 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453656 kB' 'Inactive: 1265284 kB' 'Active(anon): 132796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123960 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132508 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71464 kB' 'KernelStack: 6212 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9101092 kB' 'MemAvailable: 10480820 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453352 kB' 'Inactive: 1265284 kB' 'Active(anon): 132492 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123652 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132508 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71464 kB' 'KernelStack: 6240 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.454 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
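The backslash-heavy patterns in the trace (for example \H\u\g\e\P\a\g\e\s\_\R\s\v\d) are simply how bash xtrace renders a literal right-hand side of a [[ == ]] comparison: each field name read from the meminfo dump is being tested for plain string equality against HugePages_Rsvd, and the loop continues until the field matches. A minimal illustration of that behaviour, using hypothetical variable names rather than the SPDK helper itself:

var=HugePages_Rsvd
get=HugePages_Rsvd
# With `set -x` enabled, the quoted right-hand side below is traced as \H\u\g\e\P\a\g\e\s\_\R\s\v\d
[[ $var == "$get" ]] && echo match   # prints: match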
00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.455 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 
22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:11.456 nr_hugepages=512 00:04:11.456 resv_hugepages=0 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.456 surplus_hugepages=0 00:04:11.456 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.456 anon_hugepages=0 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9101092 kB' 'MemAvailable: 10480820 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453176 kB' 'Inactive: 1265284 kB' 'Active(anon): 132316 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 123488 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132508 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71464 kB' 'KernelStack: 6240 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
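The scan traced here walks every key/value pair that get_meminfo printed from /proc/meminfo (or from a node's meminfo file) until it reaches the requested field, then echoes that field's value, 512 for HugePages_Total on this runner. As a rough standalone equivalent, not the actual setup/common.sh implementation, a single awk lookup does the same job; the function name and arguments below are illustrative only:

get_meminfo_value() {
    # key: meminfo field name without the trailing colon; node: optional NUMA node index
    local key=$1 node=${2:-} src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node files prefix each line with "Node <n>", hence the field scan below
        src=/sys/devices/system/node/node$node/meminfo
    fi
    awk -v k="$key:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$src"
}

get_meminfo_value HugePages_Total      # expected to print 512 on this runner
get_meminfo_value HugePages_Surp 0     # surplus 2 MiB pages on node0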
00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 
22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.458 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9101772 kB' 'MemUsed: 3140204 kB' 'SwapCached: 0 kB' 'Active: 453372 kB' 'Inactive: 1265284 kB' 'Active(anon): 132512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1596608 kB' 'Mapped: 48604 kB' 'AnonPages: 123684 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61044 kB' 'Slab: 132500 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.459 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.459 22:34:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace lines elided: the read loop keeps hitting continue while it skips the remaining node0 meminfo fields (AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) ...]
00:04:11.460 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.460 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.460 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.719 node0=512 expecting 512
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:11.719 
00:04:11.719 real 0m0.527s
00:04:11.719 user 0m0.260s
00:04:11.719 sys 0m0.297s
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:11.719 22:34:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:11.719 ************************************
00:04:11.719 END TEST per_node_1G_alloc
00:04:11.719 ************************************
00:04:11.719 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
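The long runs of IFS=': ' / read -r var val _ / continue lines in this trace all come from get_meminfo in setup/common.sh walking a meminfo file one field at a time until it reaches the requested key, then echoing that key's value (the echo 0 / return 0 pair above). A minimal bash sketch of that loop, for readability only; the get_meminfo_sketch name and the per-node path handling are illustrative assumptions based on the paths visible in the trace, not SPDK's exact implementation:

  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      # When a NUMA node is given, read that node's meminfo instead of the global one
      # (path assumed from the /sys/devices/system/node/.../meminfo check in the trace).
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          # Per-node files prefix every field with "Node <n> "; strip it, mirroring
          # the mem=("${mem[@]#Node +([0-9]) }") expansion seen in the trace.
          [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
          # Split "Field:   value kB" into a name and a numeric value.
          IFS=': ' read -r var val _ <<< "$line"
          # Skip every field (the continue lines in the trace) until the requested one.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

On the system traced here, get_meminfo_sketch HugePages_Surp would print 0, matching the echo 0 that ends each scan.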
00:04:11.719 22:34:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:11.719 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:11.719 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:11.719 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.719 ************************************
00:04:11.719 START TEST even_2G_alloc
00:04:11.719 ************************************
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.719 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:11.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.982 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.982 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
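The get_test_nr_hugepages 2097152 steps above boil down to simple arithmetic: the requested size and Hugepagesize are both in kB, so 2097152 / 2048 gives the nr_hugepages=1024 seen in the trace, and with a single NUMA node all 1024 pages land on node0. A rough sketch of that calculation under those assumptions (the even split across multiple nodes is an assumption on my part; this VM only has one node):

  # Derive the hugepage count the way the traced values suggest:
  # 2 GiB requested / 2048 kB per hugepage = 1024 pages.
  size_kb=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1024
  # Spread the pages over the NUMA nodes; with _no_nodes=1 in the trace,
  # node0 simply gets all 1024 pages.
  no_nodes=1
  declare -a nodes_test
  for (( node = 0; node < no_nodes; node++ )); do
      nodes_test[node]=$(( nr_hugepages / no_nodes ))
  done
  echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes -> node0=${nodes_test[0]} pages"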
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8044900 kB' 'MemAvailable: 9424628 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453808 kB' 'Inactive: 1265284 kB' 'Active(anon): 132948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 124136 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132488 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6276 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:04:11.982 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace lines elided: the read loop skips every field from MemTotal through HardwareCorrupted, in the order of the snapshot just printed, hitting continue each time until it matches AnonHugePages ...]
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.983 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8045152 kB' 'MemAvailable: 9424880 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453512 kB' 'Inactive: 1265284 kB' 'Active(anon): 132652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123820 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132492 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71448 kB' 'KernelStack: 6256 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[... xtrace lines elided: the read loop skips every field from MemTotal through HugePages_Rsvd, in the order of the snapshot just printed, hitting continue each time until it matches HugePages_Surp ...]
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
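At this point verify_nr_hugepages has read AnonHugePages (anon=0) and HugePages_Surp (surp=0); the third read, HugePages_Rsvd, follows immediately below. Taken together, the three values appear to feed a sanity check along these lines; this is a hedged sketch only, reusing get_meminfo_sketch from the earlier sketch, since the exact pass/fail conditions inside setup/hugepages.sh are not visible in this trace:

  # Sketch of the check these reads appear to feed: with transparent hugepages not
  # forced to [never], THP-backed, surplus and reserved hugepages should all be 0,
  # so every page configured by get_test_nr_hugepages is a real 2048 kB hugepage.
  anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in the snapshots above
  surp=$(get_meminfo_sketch HugePages_Surp)    # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0, read just below in the trace
  total=$(get_meminfo_sketch HugePages_Total)  # 1024
  free=$(get_meminfo_sketch HugePages_Free)    # 1024
  # The Total == Free comparison is an assumption: the snapshots show both at 1024,
  # i.e. every configured page is still unused when the test inspects it.
  if (( anon == 0 && surp == 0 && resv == 0 && total == free )); then
      echo "nr_hugepages=$total verified: no surplus, reserved or THP-backed pages"
  fi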
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:11.985 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8045404 kB' 'MemAvailable: 9425132 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453216 kB' 'Inactive: 1265284 kB' 'Active(anon): 132356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123504 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132492 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71448 kB' 'KernelStack: 6224 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[... xtrace continues: the read loop skips MemTotal onward, field by field, on its way to HugePages_Rsvd; it has reached CommitLimit at 00:04:11.986, with the next trace line stamped 00:04:12.248 ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.248 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.249 nr_hugepages=1024 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.249 resv_hugepages=0 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.249 surplus_hugepages=0 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.249 anon_hugepages=0 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8045404 kB' 'MemAvailable: 9425132 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453216 kB' 'Inactive: 1265284 kB' 'Active(anon): 132356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123504 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132492 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71448 kB' 'KernelStack: 6224 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.249 22:34:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.249 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
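The repeated IFS=': ' / read -r var val _ / continue records above and below are setup/common.sh's get_meminfo walking the meminfo snapshot printed a few records earlier, one field at a time, until the requested key (HugePages_Rsvd, then HugePages_Total here) matches and its value is echoed. A minimal standalone sketch of that lookup, reconstructed from the traced commands rather than copied from the SPDK source, is:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (assumption: simplified,
    # not the verbatim setup/common.sh implementation).
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem var val _
        local mem_f=/proc/meminfo
        # Per-node queries (e.g. HugePages_Surp on node 0) read the node's own file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node <n> " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0   # match: print value, stop
        done < <(printf '%s\n' "${mem[@]}")                   # non-matches just continue
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd     # 0 in the trace above
    get_meminfo_sketch HugePages_Total    # 1024, matched a few records further down

The dozens of continue records in the log are simply this loop skipping every field that is not the one requested.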
00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.250 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.251 22:34:27 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8045404 kB' 'MemUsed: 4196572 kB' 'SwapCached: 0 kB' 'Active: 453468 kB' 'Inactive: 1265284 kB' 'Active(anon): 132608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1596608 kB' 'Mapped: 48604 kB' 'AnonPages: 123756 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61044 kB' 'Slab: 132492 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.251 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.252 node0=1024 expecting 1024 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.252 00:04:12.252 real 0m0.549s 00:04:12.252 user 0m0.252s 00:04:12.252 sys 0m0.310s 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.252 22:34:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.252 ************************************ 00:04:12.252 END TEST even_2G_alloc 00:04:12.252 ************************************ 00:04:12.252 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:12.252 22:34:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:12.252 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.252 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.252 22:34:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.252 ************************************ 00:04:12.252 START TEST odd_alloc 00:04:12.252 ************************************ 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
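At this point verify_nr_hugepages has confirmed that HugePages_Total (1024) equals the configured count plus surplus and reserved pages (both 0), and odd_alloc asks for 2098176 kB, which the trace resolves to nr_hugepages=1025 before scripts/setup.sh runs again. A short sketch of those two calculations follows; the helper name meminfo_val and the ceiling division are assumptions that merely reproduce the 2098176 kB (HUGEMEM=2049) to 1025-page result shown in the log, not the exact hugepages.sh formula.

    #!/usr/bin/env bash
    # Sketch of the even_2G_alloc accounting check and the odd_alloc sizing
    # (assumption: simplified stand-ins, not the verbatim setup/hugepages.sh code).

    meminfo_val() {                        # hypothetical helper name
        awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo
    }

    total=$(meminfo_val HugePages_Total)   # 1024 in the log
    rsvd=$(meminfo_val HugePages_Rsvd)     # 0
    surp=$(meminfo_val HugePages_Surp)     # 0
    nr_hugepages=1024                      # what even_2G_alloc configured
    if (( total == nr_hugepages + surp + rsvd )); then
        echo "hugepage accounting holds: $total == $nr_hugepages + $surp + $rsvd"
    fi

    # odd_alloc: HUGEMEM=2049 -> 2049 * 1024 = 2098176 kB of hugepage memory.
    size_kb=$((2049 * 1024))
    page_kb=$(meminfo_val Hugepagesize)    # 2048 kB on this VM
    echo $(( (size_kb + page_kb - 1) / page_kb ))   # 1025, matching the traced nr_hugepages=1025

That 1025 is exactly the value echoed at hugepages.sh@57 above, and it is what the "setup output" step that follows hands to scripts/setup.sh.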
00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.252 22:34:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.512 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.512 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.512 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8053620 kB' 'MemAvailable: 9433348 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453544 kB' 'Inactive: 1265284 kB' 'Active(anon): 132684 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123752 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132512 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71468 kB' 'KernelStack: 6236 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 
22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.513 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.513 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 
22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.514 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
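Each run of [[ <key> == ... ]] / continue entries above is one pass of get_meminfo: it walks /proc/meminfo one field at a time until the requested key matches, echoes that value (0 kB of AnonHugePages here), and returns; the next pass, starting below, does the same for HugePages_Surp. Condensed into a standalone sketch (a paraphrase of the traced common.sh lines, with the per-node branch folded into one test, not the verbatim SPDK script):

    #!/usr/bin/env bash
    # condensed paraphrase of the traced get_meminfo
    shopt -s extglob                       # needed for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # a per-node query would read that node's own meminfo instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the long runs of "continue" in the trace
            echo "$val"                         # kB figure, or a bare count for HugePages_*
            return 0
        done
    }
    get_meminfo AnonHugePages              # prints 0 in the run traced above
    get_meminfo HugePages_Surp             # prints 0 in the run traced above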
00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8053620 kB' 'MemAvailable: 9433348 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453556 kB' 'Inactive: 1265284 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123824 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132512 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71468 kB' 'KernelStack: 6236 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.794 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
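The right-hand side of these comparisons only looks mangled because of how bash -x prints it: the script compares against a quoted literal string, and xtrace backslash-escapes every character of a quoted word in [[ ]] pattern position so the printed line would still match literally if re-executed. A two-line standalone illustration (not taken from the test scripts):

    # illustration of the xtrace rendering seen in the entries above
    set -x
    key=HugePages_Surp
    [[ $key == "HugePages_Surp" ]] && echo matched
    # with the default PS4 this traces roughly as:
    # + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]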
00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 
22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8053620 kB' 'MemAvailable: 9433348 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453532 kB' 'Inactive: 1265284 kB' 'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123824 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132516 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71472 kB' 'KernelStack: 6236 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
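This is the third meminfo walk of verify_nr_hugepages (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd). Once it returns, the trace at the end of this pass shows the values being folded together (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checked for consistency. Schematically, as a self-contained paraphrase that reads the counters directly; "requested" is a label invented here for the literal 1025 that appears in the traced arithmetic, not a name from the script:

    # schematic of the verification traced at the end of this HugePages_Rsvd pass
    anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=1025                 # the count odd_alloc configured above
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"       # 0 in the run traced here
    echo "surplus_hugepages=$surp"    # 0 in the run traced here
    echo "anon_hugepages=$anon"       # 0 in the run traced here
    requested=1025                    # hypothetical label for the literal in the traced (( ))
    (( requested == nr_hugepages + surp + resv ))   # traced as (( 1025 == nr_hugepages + surp + resv ))
    (( requested == nr_hugepages ))                 # traced as (( 1025 == nr_hugepages )); both hold in this log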
00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.797 nr_hugepages=1025 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:12.797 resv_hugepages=0 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.797 surplus_hugepages=0 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.797 anon_hugepages=0 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.797 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8053620 kB' 'MemAvailable: 9433348 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453468 kB' 'Inactive: 1265284 kB' 'Active(anon): 132608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123716 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132508 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71464 kB' 'KernelStack: 6220 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.798 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.799 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8053620 kB' 'MemUsed: 4188356 kB' 'SwapCached: 0 kB' 'Active: 453528 kB' 'Inactive: 1265284 kB' 'Active(anon): 132668 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1596608 kB' 'Mapped: 48672 kB' 'AnonPages: 123792 kB' 'Shmem: 10464 kB' 'KernelStack: 6236 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61044 kB' 'Slab: 132508 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.800 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.801 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
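The trace above is dominated by setup/common.sh's get_meminfo helper, which walks every key of the selected meminfo file until it reaches the one it was asked for (here HugePages_Surp on node 0, which echoes 0); every non-matching key shows up in the xtrace as a "continue". A minimal sketch of that parsing loop, reconstructed from the xtrace for illustration only (the authoritative code lives in setup/common.sh in the SPDK repo and may differ in detail):

    shopt -s extglob            # the "Node +([0-9]) " prefix strip below uses an extended glob
    get_meminfo() {             # sketch: get_meminfo <key> [node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # A per-node query reads that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # mismatches are the "continue" entries in the xtrace
            echo "$val"                        # e.g. 1025 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done
    }

Called as get_meminfo HugePages_Total it prints 1025 for this run, and get_meminfo HugePages_Surp 0 prints 0, matching the values the verify logic compares against below.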
00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.802 node0=1025 expecting 1025 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:12.802 00:04:12.802 real 0m0.548s 00:04:12.802 user 0m0.253s 00:04:12.802 sys 0m0.328s 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.802 22:34:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.802 ************************************ 00:04:12.802 END TEST odd_alloc 00:04:12.802 ************************************ 00:04:12.802 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:12.802 22:34:28 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:12.802 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.802 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.802 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.802 ************************************ 00:04:12.802 START TEST custom_alloc 00:04:12.802 ************************************ 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.802 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.061 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.061 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9102104 kB' 'MemAvailable: 10481832 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453836 kB' 'Inactive: 1265284 kB' 'Active(anon): 132976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132496 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6340 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.326 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.326 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.327 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9102104 kB' 'MemAvailable: 10481832 kB' 'Buffers: 2436 kB' 'Cached: 
1594172 kB' 'SwapCached: 0 kB' 'Active: 453748 kB' 'Inactive: 1265284 kB' 'Active(anon): 132888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123832 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132492 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71448 kB' 'KernelStack: 6276 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.328 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
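Every comparison in this scan is traced as something like [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]. The per-character backslashes are almost certainly just bash xtrace rendering a quoted right-hand side inside [[ ]]: quoting the pattern forces a literal string comparison instead of a glob match, and set -x escapes each character so the printed form would still match literally if re-executed. A tiny illustration of the presumed construct (variable names mirror the trace, not necessarily the actual common.sh source):

  get=HugePages_Surp              # the key get_meminfo was asked for
  var=MemTotal                    # current key from the meminfo line being scanned
  set -x
  [[ $var == "$get" ]] || echo 'no match, scan the next line'
  set +x
  # The traced form of the test is expected to look like:
  #   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]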
00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.329 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9102104 kB' 'MemAvailable: 10481832 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453692 kB' 'Inactive: 1265284 kB' 'Active(anon): 132832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123808 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132496 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6284 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.330 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.331 nr_hugepages=512 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:13.331 resv_hugepages=0 
00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.331 surplus_hugepages=0 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.331 anon_hugepages=0 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.331 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9102104 kB' 'MemAvailable: 10481832 kB' 'Buffers: 2436 kB' 'Cached: 1594172 kB' 'SwapCached: 0 kB' 'Active: 453736 kB' 'Inactive: 1265284 kB' 'Active(anon): 132876 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123864 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132496 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6284 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.332 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 
22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9102104 kB' 'MemUsed: 3139872 kB' 'SwapCached: 0 kB' 'Active: 453724 kB' 'Inactive: 1265284 kB' 'Active(anon): 132864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1596608 kB' 'Mapped: 48616 kB' 'AnonPages: 123900 kB' 'Shmem: 10464 kB' 'KernelStack: 6300 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61044 kB' 'Slab: 132496 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.333 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.333 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.334 node0=512 expecting 512 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.334 22:34:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.335 00:04:13.335 real 0m0.547s 00:04:13.335 user 0m0.274s 00:04:13.335 sys 0m0.304s 00:04:13.335 22:34:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.335 22:34:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.335 ************************************ 00:04:13.335 END TEST custom_alloc 
00:04:13.335 ************************************ 00:04:13.335 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.335 22:34:28 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:13.335 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.335 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.335 22:34:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.335 ************************************ 00:04:13.335 START TEST no_shrink_alloc 00:04:13.335 ************************************ 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.335 22:34:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.907 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.907 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:13.907 
22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8065748 kB' 'MemAvailable: 9445480 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 453956 kB' 'Inactive: 1265288 kB' 'Active(anon): 133096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 124224 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132436 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 6180 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
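[editor's note] The verify_nr_hugepages walk that begins above (hugepages.sh@89-@130 in the trace) only counts AnonHugePages when transparent hugepages are not pinned to "[never]" (the @96 check), then pulls HugePages_Rsvd, HugePages_Surp and HugePages_Total, requires the pool to add up, and finally prints the per-node "node0=N expecting N" line seen at the end of the custom_alloc test. A rough, hedged sketch of that accounting, reusing the get_meminfo_sketch above; verify_sketch is illustrative only, not the project's verify_nr_hugepages.

  # Illustrative only; mirrors the checks visible in the hugepages.sh trace
  # but is not the project's verify_nr_hugepages.
  verify_sketch() {
      local nr_expected=$1                       # 512 for custom_alloc, 1024 here
      local anon=0 resv surp total node_surp
      # AnonHugePages is only meaningful when THP is not "[never]".
      if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
          anon=$(get_meminfo_sketch AnonHugePages)
      fi
      resv=$(get_meminfo_sketch HugePages_Rsvd)
      surp=$(get_meminfo_sketch HugePages_Surp)
      total=$(get_meminfo_sketch HugePages_Total)
      echo "nr_hugepages=$total" "resv_hugepages=$resv" \
           "surplus_hugepages=$surp" "anon_hugepages=$anon"
      # Every page in the pool must be accounted for: requested + surplus + reserved.
      (( total == nr_expected + surp + resv )) || return 1
      # Fold reserved and node 0's surplus into its expected count, then compare,
      # as in the "node0=512 expecting 512" line near the end of custom_alloc.
      node_surp=$(get_meminfo_sketch HugePages_Surp 0)
      echo "node0=$(( nr_expected + resv + node_surp )) expecting $nr_expected"
      [[ $(( nr_expected + resv + node_surp )) == "$nr_expected" ]]
  }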
00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 
22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.907 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 
22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8065748 kB' 'MemAvailable: 9445480 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 453508 kB' 'Inactive: 1265288 kB' 'Active(anon): 132648 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123812 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132436 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 6256 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.908 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.909 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8065748 kB' 'MemAvailable: 9445480 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 453520 kB' 'Inactive: 1265288 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123812 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132436 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 6256 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.910 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.911 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.912 nr_hugepages=1024 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.912 resv_hugepages=0 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.912 surplus_hugepages=0 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.912 anon_hugepages=0 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
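The xtrace above is setup/common.sh's get_meminfo helper at work: each lookup (AnonHugePages for anon, HugePages_Surp for surp, HugePages_Rsvd for resv, and the HugePages_Total lookup that continues below) reads /proc/meminfo with IFS=': ', skips every field that does not match the requested key via continue, and echoes the matching value back to setup/hugepages.sh, which records anon=0, surp=0, resv=0 and then verifies that the expected 1024 hugepages are still allocated. A minimal standalone sketch of that lookup pattern follows; the helper name and argument handling are illustrative only, and the per-node "Node N " prefix stripping the real script performs is omitted.

#!/usr/bin/env bash
# Sketch only: same field-matching loop as the trace above, not the actual
# setup/common.sh implementation.
get_meminfo_value() {
    local get=$1                  # e.g. HugePages_Surp, AnonHugePages
    local node=${2:-}             # optional NUMA node; empty = system-wide
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Non-matching fields are skipped; this is what produces the long
        # runs of "continue" lines in the xtrace.
        [[ $var == "$get" ]] || continue
        echo "$val"               # value only; a trailing "kB" unit lands in $_
        return 0
    done < "$mem_f"
    return 1
}

# Example: the same lookups the no_shrink_alloc test performs.
for key in AnonHugePages HugePages_Surp HugePages_Rsvd HugePages_Total; do
    printf '%s=%s\n' "$key" "$(get_meminfo_value "$key")"
done

Splitting on IFS=': ' separates the field name, the value, and the unit in a single read, which is why the trace shows read -r var val _ for every meminfo line rather than a grep or awk pipeline.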
00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8065748 kB' 'MemAvailable: 9445480 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 453560 kB' 'Inactive: 1265288 kB' 'Active(anon): 132700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123812 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 61044 kB' 'Slab: 132436 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 6256 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.912 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.913 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8065748 kB' 'MemUsed: 4176228 kB' 'SwapCached: 0 kB' 'Active: 453260 kB' 'Inactive: 1265288 kB' 'Active(anon): 132400 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1596612 kB' 'Mapped: 48604 kB' 'AnonPages: 123516 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61044 kB' 'Slab: 132436 kB' 'SReclaimable: 61044 kB' 'SUnreclaim: 71392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.914 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 
22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.915 node0=1024 expecting 1024 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.915 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.487 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.487 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.487 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:14.487 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8077344 kB' 'MemAvailable: 9457072 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 448368 kB' 'Inactive: 1265288 kB' 'Active(anon): 127508 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118668 kB' 'Mapped: 47996 kB' 'Shmem: 10464 kB' 'KReclaimable: 61036 kB' 'Slab: 132200 kB' 'SReclaimable: 61036 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6100 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
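[Editor's note] At this point verify_nr_hugepages (hugepages.sh@89 onward) re-checks the global counters: anonymous huge pages are only counted when transparent hugepages are not set to "[never]" (hugepages.sh@96), and the pool must satisfy total == nr_hugepages + surplus + reserved (hugepages.sh@107/@109). A rough self-contained sketch of that bookkeeping is below; the awk-based meminfo helper stands in for the real get_meminfo and is an assumption made for brevity.

#!/usr/bin/env bash
# Sketch of the verify_nr_hugepages accounting traced above (illustrative;
# the real helper is get_meminfo in setup/common.sh, replaced here by awk).
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                      # expectation set by the test
anon=0

# hugepages.sh@96: only count AnonHugePages when THP is not "[never]"
if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    anon=$(meminfo AnonHugePages)
fi
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)
total=$(meminfo HugePages_Total)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# hugepages.sh@107/@109: the pool must add up and match the request
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
    && echo OK || echo MISMATCH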
00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.487 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.488 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8079036 kB' 'MemAvailable: 9458764 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 448028 kB' 'Inactive: 1265288 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118292 kB' 'Mapped: 47864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61036 kB' 'Slab: 132200 kB' 'SReclaimable: 61036 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6128 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54468 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.489 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 
22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.490 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
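The long run of IFS=': ', read -r var val _ and continue lines above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the requested one (HugePages_Surp here, which resolves to 0 and becomes surp=0). A minimal sketch of that pattern, reconstructed from the trace, is shown below; the function name get_meminfo_sketch and the failure handling are illustrative assumptions, not the verbatim SPDK source.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced above (illustrative, not the
# exact setup/common.sh implementation).
shopt -s extglob    # required by the +([0-9]) prefix-stripping pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local line var val _
    local mem_f=/proc/meminfo mem

    # With a node argument, prefer the per-node meminfo file; with node=''
    # the trace shows this test probing /sys/devices/system/node/node/meminfo
    # and falling back to /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip fields until the requested one
        echo "${val:-0}"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp    # prints 0 on the system captured above

Each of the three lookups in this part of the trace (HugePages_Surp, HugePages_Rsvd, HugePages_Total) walks the same list of fields, which is why the same IFS/read/continue triplet repeats once per meminfo line.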
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8078788 kB' 'MemAvailable: 9458516 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 448240 kB' 'Inactive: 1265288 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118276 kB' 'Mapped: 47864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61036 kB' 'Slab: 132200 kB' 'SReclaimable: 61036 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6128 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54452 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.491 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.492 nr_hugepages=1024 00:04:14.492 resv_hugepages=0 00:04:14.492 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.493 surplus_hugepages=0 00:04:14.493 anon_hugepages=0 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
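At this point hugepages.sh has collected the anonymous, surplus and reserved hugepage counts (all 0 in this run), echoed them alongside nr_hugepages=1024, and verified the totals before re-reading HugePages_Total. A standalone sketch of that bookkeeping follows; meminfo_field is a hypothetical awk helper standing in for the script's own get_meminfo.

#!/usr/bin/env bash
# Sketch of the hugepage accounting visible at this point in the trace
# (illustrative; the real checks live in setup/hugepages.sh).
meminfo_field() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

anon=$(meminfo_field AnonHugePages)      # 0 in the run above
surp=$(meminfo_field HugePages_Surp)     # 0
resv=$(meminfo_field HugePages_Rsvd)     # 0
nr_hugepages=1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Continue only if the configured 1024 pages account for the kernel's view
# once surplus and reserved pages are added back, then re-read
# HugePages_Total for the comparison that follows in the trace.
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
    meminfo_field HugePages_Total        # 1024 here
fi

With all three adjustments at 0 and 1024 pages both configured and reported free in the snapshots above, both arithmetic checks pass and the test moves on to the HugePages_Total lookup shown in the following trace lines.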
setup/common.sh@28 -- # mapfile -t mem 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8078908 kB' 'MemAvailable: 9458636 kB' 'Buffers: 2436 kB' 'Cached: 1594176 kB' 'SwapCached: 0 kB' 'Active: 448060 kB' 'Inactive: 1265288 kB' 'Active(anon): 127200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118340 kB' 'Mapped: 47864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61036 kB' 'Slab: 132200 kB' 'SReclaimable: 61036 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6128 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54452 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.493 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
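The long run of continue entries through this stretch is setup/common.sh walking a meminfo file one field at a time until it reaches the requested key (HugePages_Total here, which resolves to 1024 just below). A condensed sketch of that parsing pattern, with the field layout taken from the trace; the helper name and the exact SPDK implementation details are assumptions:

    #!/usr/bin/env bash
    # Scan a meminfo-style file ("Key:   value kB") and print the value of one key.
    get_meminfo_value() {
        local want=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # each skipped field shows up as 'continue' in the trace
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_value HugePages_Total   # prints 1024 on the VM captured in this log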
00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.494 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8079568 kB' 'MemUsed: 4162408 kB' 'SwapCached: 0 kB' 'Active: 
448052 kB' 'Inactive: 1265288 kB' 'Active(anon): 127192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265288 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1596612 kB' 'Mapped: 47864 kB' 'AnonPages: 118348 kB' 'Shmem: 10464 kB' 'KernelStack: 6128 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61036 kB' 'Slab: 132200 kB' 'SReclaimable: 61036 kB' 'SUnreclaim: 71164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 
22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.495 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.496 22:34:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.496 node0=1024 expecting 1024 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:14.496 00:04:14.496 real 0m1.088s 00:04:14.496 user 0m0.562s 00:04:14.496 sys 0m0.566s 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.496 22:34:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.496 ************************************ 00:04:14.496 END TEST no_shrink_alloc 00:04:14.496 ************************************ 00:04:14.496 22:34:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.496 
22:34:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.496 22:34:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.496 00:04:14.496 real 0m4.651s 00:04:14.496 user 0m2.224s 00:04:14.496 sys 0m2.513s 00:04:14.496 ************************************ 00:04:14.496 END TEST hugepages 00:04:14.496 ************************************ 00:04:14.496 22:34:29 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.496 22:34:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.496 22:34:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.496 22:34:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.496 22:34:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.496 22:34:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.496 22:34:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.496 ************************************ 00:04:14.496 START TEST driver 00:04:14.496 ************************************ 00:04:14.496 22:34:30 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.758 * Looking for test storage... 00:04:14.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.759 22:34:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:14.759 22:34:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.759 22:34:30 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.324 22:34:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:15.324 22:34:30 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.324 22:34:30 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.324 22:34:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.324 ************************************ 00:04:15.324 START TEST guess_driver 00:04:15.324 ************************************ 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
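Before the driver tests start, the clear_hp loop above walks every NUMA node and zeroes each hugepage pool. A sketch of the equivalent cleanup; the nr_hugepages redirect target is inferred, since redirections are not echoed by xtrace:

    #!/usr/bin/env bash
    # Reset all hugepage pools on every NUMA node (needs root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # the trace only shows the bare 'echo 0'
        done
    done
    export CLEAR_HUGE=yes   # exported by the test for later setup.sh invocations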
00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:15.324 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:15.324 Looking for driver=uio_pci_generic 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.324 22:34:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.891 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:15.891 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:15.891 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.149 22:34:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.715 00:04:16.715 real 0m1.490s 00:04:16.715 user 0m0.569s 00:04:16.715 sys 0m0.918s 00:04:16.715 22:34:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:16.715 ************************************ 00:04:16.715 END TEST guess_driver 00:04:16.715 ************************************ 00:04:16.715 22:34:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.715 22:34:32 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:16.715 ************************************ 00:04:16.715 END TEST driver 00:04:16.715 ************************************ 00:04:16.715 00:04:16.715 real 0m2.197s 00:04:16.715 user 0m0.805s 00:04:16.715 sys 0m1.442s 00:04:16.715 22:34:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.715 22:34:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.973 22:34:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:16.973 22:34:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:16.973 22:34:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.973 22:34:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.973 22:34:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.973 ************************************ 00:04:16.973 START TEST devices 00:04:16.973 ************************************ 00:04:16.973 22:34:32 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:16.973 * Looking for test storage... 00:04:16.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.973 22:34:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:16.973 22:34:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:16.973 22:34:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.973 22:34:32 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
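The guess_driver run above settles on uio_pci_generic because the VM exposes no IOMMU groups and unsafe no-IOMMU mode is not enabled, so vfio is rejected and the fallback is verified with modprobe --show-depends. A condensed sketch of that decision, not the full setup/driver.sh logic:

    #!/usr/bin/env bash
    # Pick a userspace PCI driver the way the trace above does: vfio-pci if an
    # IOMMU is usable, otherwise uio_pci_generic if the module is available.
    pick_driver() {
        shopt -s nullglob                      # so an empty /sys/kernel/iommu_groups counts as 0
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
            echo uio_pci_generic               # the result on this VM
        else
            echo 'No valid driver found'
        fi
    }

    pick_driver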
00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.539 22:34:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:17.539 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:17.539 22:34:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:17.539 22:34:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:17.798 No valid GPT data, bailing 00:04:17.798 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.798 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:17.798 22:34:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:17.798 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:17.798 22:34:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:17.798 22:34:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:17.798 22:34:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:17.799 
22:34:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:17.799 No valid GPT data, bailing 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:17.799 22:34:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:17.799 22:34:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:17.799 22:34:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:17.799 No valid GPT data, bailing 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:17.799 22:34:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:17.799 22:34:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:17.799 22:34:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:17.799 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:17.799 22:34:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:17.799 22:34:33 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:18.057 No valid GPT data, bailing 00:04:18.057 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.057 22:34:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.057 22:34:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:18.057 22:34:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:18.057 22:34:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:18.057 22:34:33 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:18.057 22:34:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:18.057 22:34:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.057 22:34:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.057 22:34:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.057 ************************************ 00:04:18.057 START TEST nvme_mount 00:04:18.057 ************************************ 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.057 22:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:18.992 Creating new GPT entries in memory. 00:04:18.992 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.992 other utilities. 00:04:18.992 22:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.992 22:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.992 22:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.992 22:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.992 22:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:19.929 Creating new GPT entries in memory. 00:04:19.929 The operation has completed successfully. 00:04:19.929 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.929 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.929 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56952 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.187 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.446 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.446 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.446 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.446 22:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:20.446 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.705 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.705 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.705 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.705 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.705 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.705 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.964 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.964 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.964 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.964 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:20.964 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.223 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.223 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.223 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.223 22:34:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.482 22:34:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.741 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.001 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.001 00:04:22.001 real 0m3.938s 00:04:22.001 user 0m0.671s 00:04:22.001 sys 0m1.007s 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.001 22:34:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:22.001 ************************************ 00:04:22.001 END TEST nvme_mount 00:04:22.001 ************************************ 00:04:22.001 22:34:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:22.001 22:34:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:22.001 22:34:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.001 22:34:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.001 22:34:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.001 ************************************ 00:04:22.001 START TEST dm_mount 00:04:22.001 ************************************ 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
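Before the dm_mount partitioning below, it is worth noting that the nvme_mount test that just finished (END TEST nvme_mount above) is a plain format/mount/verify/cleanup cycle against the namespace. A hand-written sketch of that cycle using the device and mountpoint seen in this run (an approximation for reference, not the harness code itself):

  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

  # format the namespace and mount it under the test directory
  mkdir -p "$mnt"
  mkfs.ext4 -qF /dev/nvme0n1 1024M
  mount /dev/nvme0n1 "$mnt"

  # verify: the mountpoint must answer and the marker file must be creatable
  mountpoint -q "$mnt"
  touch "$mnt/test_nvme"

  # cleanup: unmount and wipe any leftover filesystem/GPT signatures
  umount "$mnt"
  wipefs --all /dev/nvme0n1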
00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:22.001 22:34:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:22.938 Creating new GPT entries in memory. 00:04:22.938 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.938 other utilities. 00:04:22.938 22:34:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.938 22:34:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.938 22:34:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.938 22:34:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.938 22:34:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:24.360 Creating new GPT entries in memory. 00:04:24.360 The operation has completed successfully. 00:04:24.360 22:34:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:24.360 22:34:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.360 22:34:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.360 22:34:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.360 22:34:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:24.927 The operation has completed successfully. 
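The sgdisk calls above carve two equal GPT partitions out of nvme0n1, holding an flock on the block device so the partition-table updates do not race the uevent handling that sync_dev_uevents.sh waits on. Condensed from the trace (a sketch using the sector ranges from this run):

  disk=/dev/nvme0n1

  # drop any existing partition table, then create partitions 1 and 2 under a lock
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:264191
  flock "$disk" sgdisk "$disk" --new=2:264192:526335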
00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57385 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.186 22:34:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.443 22:34:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.700 22:34:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:25.957 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:26.214 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:26.214 00:04:26.214 real 0m4.182s 00:04:26.214 user 0m0.457s 00:04:26.214 sys 0m0.697s 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.214 22:34:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:26.214 ************************************ 00:04:26.214 END TEST dm_mount 00:04:26.214 ************************************ 00:04:26.214 22:34:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:26.214 22:34:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:26.214 22:34:41 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:26.214 22:34:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.214 22:34:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.214 22:34:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.214 22:34:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.214 22:34:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.472 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.472 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.472 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.472 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.472 22:34:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.472 00:04:26.472 real 0m9.646s 00:04:26.472 user 0m1.744s 00:04:26.472 sys 0m2.309s 00:04:26.472 22:34:41 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.472 22:34:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.472 ************************************ 00:04:26.472 END TEST devices 00:04:26.472 ************************************ 00:04:26.472 22:34:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:26.472 00:04:26.472 real 0m21.465s 00:04:26.472 user 0m6.839s 00:04:26.472 sys 0m9.066s 00:04:26.472 22:34:41 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.472 22:34:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.472 ************************************ 00:04:26.472 END TEST setup.sh 00:04:26.472 ************************************ 00:04:26.472 22:34:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.472 22:34:42 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:27.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.404 Hugepages 00:04:27.404 node hugesize free / total 00:04:27.404 node0 1048576kB 0 / 0 00:04:27.404 node0 2048kB 2048 / 2048 00:04:27.404 00:04:27.404 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.404 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:27.404 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:27.404 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:27.404 22:34:42 -- spdk/autotest.sh@130 -- # uname -s 00:04:27.404 22:34:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:27.404 22:34:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:27.404 22:34:42 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.970 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.227 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.227 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.227 22:34:43 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:29.161 22:34:44 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:29.161 22:34:44 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:29.161 22:34:44 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:29.161 22:34:44 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:29.161 22:34:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:29.161 22:34:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:29.161 22:34:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.161 22:34:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.161 22:34:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:29.419 22:34:44 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:29.419 22:34:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:29.419 22:34:44 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.678 Waiting for block devices as requested 00:04:29.678 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.936 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.936 22:34:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:29.936 22:34:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:29.936 22:34:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:29.936 22:34:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:29.936 22:34:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:29.936 22:34:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:29.936 22:34:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:29.936 22:34:45 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:29.936 22:34:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:29.936 22:34:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:29.936 22:34:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:29.936 22:34:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:29.936 22:34:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:29.936 22:34:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:29.936 22:34:45 -- common/autotest_common.sh@1557 -- # continue 00:04:29.936 
22:34:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:29.936 22:34:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:29.936 22:34:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.936 22:34:45 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:29.936 22:34:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.937 22:34:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:29.937 22:34:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.937 22:34:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:29.937 22:34:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:29.937 22:34:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:29.937 22:34:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:29.937 22:34:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:29.937 22:34:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:29.937 22:34:45 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:29.937 22:34:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:29.937 22:34:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:29.937 22:34:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:29.937 22:34:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:29.937 22:34:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:29.937 22:34:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:29.937 22:34:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:29.937 22:34:45 -- common/autotest_common.sh@1557 -- # continue 00:04:29.937 22:34:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:29.937 22:34:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.937 22:34:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.937 22:34:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:29.937 22:34:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.937 22:34:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.937 22:34:45 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.764 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.764 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.764 22:34:46 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:30.764 22:34:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.764 22:34:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.764 22:34:46 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:30.764 22:34:46 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:30.764 22:34:46 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.764 22:34:46 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:30.764 22:34:46 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:30.764 22:34:46 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:30.764 22:34:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:30.764 22:34:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:30.764 22:34:46 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.764 22:34:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.764 22:34:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:31.022 22:34:46 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:31.023 22:34:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:31.023 22:34:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:31.023 22:34:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:31.023 22:34:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:31.023 22:34:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.023 22:34:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:31.023 22:34:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:31.023 22:34:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:31.023 22:34:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.023 22:34:46 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:31.023 22:34:46 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:31.023 22:34:46 -- common/autotest_common.sh@1593 -- # return 0 00:04:31.023 22:34:46 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:31.023 22:34:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:31.023 22:34:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:31.023 22:34:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:31.023 22:34:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:31.023 22:34:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.023 22:34:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.023 22:34:46 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:31.023 22:34:46 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:31.023 22:34:46 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:31.023 22:34:46 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:31.023 22:34:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.023 22:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.023 22:34:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.023 ************************************ 00:04:31.023 START TEST env 00:04:31.023 ************************************ 00:04:31.023 22:34:46 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:31.023 * Looking for test storage... 
00:04:31.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:31.023 22:34:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.023 22:34:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.023 22:34:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.023 22:34:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.023 ************************************ 00:04:31.023 START TEST env_memory 00:04:31.023 ************************************ 00:04:31.023 22:34:46 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.023 00:04:31.023 00:04:31.023 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.023 http://cunit.sourceforge.net/ 00:04:31.023 00:04:31.023 00:04:31.023 Suite: memory 00:04:31.023 Test: alloc and free memory map ...[2024-07-15 22:34:46.543347] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.023 passed 00:04:31.023 Test: mem map translation ...[2024-07-15 22:34:46.574143] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.023 [2024-07-15 22:34:46.574193] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.023 [2024-07-15 22:34:46.574249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.023 [2024-07-15 22:34:46.574260] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:31.281 passed 00:04:31.281 Test: mem map registration ...[2024-07-15 22:34:46.638112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:31.281 [2024-07-15 22:34:46.638159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:31.281 passed 00:04:31.281 Test: mem map adjacent registrations ...passed 00:04:31.281 00:04:31.281 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.281 suites 1 1 n/a 0 0 00:04:31.281 tests 4 4 4 0 0 00:04:31.281 asserts 152 152 152 0 n/a 00:04:31.281 00:04:31.281 Elapsed time = 0.213 seconds 00:04:31.281 00:04:31.281 real 0m0.228s 00:04:31.281 user 0m0.214s 00:04:31.281 sys 0m0.012s 00:04:31.281 22:34:46 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.281 ************************************ 00:04:31.281 END TEST env_memory 00:04:31.281 ************************************ 00:04:31.281 22:34:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:31.281 22:34:46 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.281 22:34:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:31.281 22:34:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.281 22:34:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.281 22:34:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.281 ************************************ 00:04:31.281 START TEST env_vtophys 
00:04:31.281 ************************************ 00:04:31.281 22:34:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:31.281 EAL: lib.eal log level changed from notice to debug 00:04:31.281 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 1 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 2 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 3 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 4 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 5 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 6 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 7 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 8 as core 0 on socket 0 00:04:31.281 EAL: Detected lcore 9 as core 0 on socket 0 00:04:31.281 EAL: Maximum logical cores by configuration: 128 00:04:31.281 EAL: Detected CPU lcores: 10 00:04:31.281 EAL: Detected NUMA nodes: 1 00:04:31.281 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:31.281 EAL: Detected shared linkage of DPDK 00:04:31.281 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.281 EAL: Selected IOVA mode 'PA' 00:04:31.281 EAL: Probing VFIO support... 00:04:31.281 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:31.281 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:31.281 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.281 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.281 EAL: Setting up physically contiguous memory... 00:04:31.281 EAL: Setting maximum number of open files to 524288 00:04:31.281 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.281 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.282 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.282 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.282 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.282 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.282 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.282 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.282 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.282 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.282 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.282 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.282 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.282 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.282 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.282 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.282 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.282 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.282 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.282 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.282 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.282 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.282 EAL: Hugepages will be freed exactly as allocated. 
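The memseg lists above are backed by the 2048 kB hugepages reserved earlier by scripts/setup.sh (the 'node0 2048kB 2048 / 2048' line in the status output). If EAL initialisation fails at this point, a quick sanity check outside the harness is to read the kernel's per-node hugepage counters directly (generic Linux sysfs paths, not part of the test scripts):

  # 2 MiB hugepage pool on node 0: total and currently free pages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages

  # system-wide summary
  grep -i huge /proc/meminfo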
00:04:31.282 EAL: No shared files mode enabled, IPC is disabled 00:04:31.282 EAL: No shared files mode enabled, IPC is disabled 00:04:31.540 EAL: TSC frequency is ~2200000 KHz 00:04:31.540 EAL: Main lcore 0 is ready (tid=7f30a5ce2a00;cpuset=[0]) 00:04:31.540 EAL: Trying to obtain current memory policy. 00:04:31.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.540 EAL: Restoring previous memory policy: 0 00:04:31.540 EAL: request: mp_malloc_sync 00:04:31.540 EAL: No shared files mode enabled, IPC is disabled 00:04:31.540 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.540 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:31.540 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:31.540 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.540 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:31.540 00:04:31.540 00:04:31.540 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.540 http://cunit.sourceforge.net/ 00:04:31.540 00:04:31.540 00:04:31.540 Suite: components_suite 00:04:31.540 Test: vtophys_malloc_test ...passed 00:04:31.540 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.540 EAL: Restoring previous memory policy: 4 00:04:31.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.540 EAL: request: mp_malloc_sync 00:04:31.540 EAL: No shared files mode enabled, IPC is disabled 00:04:31.540 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.540 EAL: request: mp_malloc_sync 00:04:31.540 EAL: No shared files mode enabled, IPC is disabled 00:04:31.540 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.540 EAL: Trying to obtain current memory policy. 00:04:31.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.540 EAL: Restoring previous memory policy: 4 00:04:31.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.540 EAL: request: mp_malloc_sync 00:04:31.540 EAL: No shared files mode enabled, IPC is disabled 00:04:31.540 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.541 EAL: Trying to obtain current memory policy. 00:04:31.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.541 EAL: Restoring previous memory policy: 4 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.541 EAL: Trying to obtain current memory policy. 
00:04:31.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.541 EAL: Restoring previous memory policy: 4 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.541 EAL: Trying to obtain current memory policy. 00:04:31.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.541 EAL: Restoring previous memory policy: 4 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.541 EAL: Trying to obtain current memory policy. 00:04:31.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.541 EAL: Restoring previous memory policy: 4 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.541 EAL: Trying to obtain current memory policy. 00:04:31.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.541 EAL: Restoring previous memory policy: 4 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.541 EAL: Trying to obtain current memory policy. 00:04:31.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.799 EAL: Restoring previous memory policy: 4 00:04:31.799 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.799 EAL: request: mp_malloc_sync 00:04:31.799 EAL: No shared files mode enabled, IPC is disabled 00:04:31.799 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.799 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.799 EAL: request: mp_malloc_sync 00:04:31.799 EAL: No shared files mode enabled, IPC is disabled 00:04:31.799 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.799 EAL: Trying to obtain current memory policy. 
00:04:31.799 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.057 EAL: Restoring previous memory policy: 4 00:04:32.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.057 EAL: request: mp_malloc_sync 00:04:32.057 EAL: No shared files mode enabled, IPC is disabled 00:04:32.057 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.057 EAL: request: mp_malloc_sync 00:04:32.057 EAL: No shared files mode enabled, IPC is disabled 00:04:32.057 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.057 EAL: Trying to obtain current memory policy. 00:04:32.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.316 EAL: Restoring previous memory policy: 4 00:04:32.316 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.316 EAL: request: mp_malloc_sync 00:04:32.316 EAL: No shared files mode enabled, IPC is disabled 00:04:32.316 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.575 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.834 passed 00:04:32.834 00:04:32.834 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.834 suites 1 1 n/a 0 0 00:04:32.834 tests 2 2 2 0 0 00:04:32.834 asserts 5274 5274 5274 0 n/a 00:04:32.834 00:04:32.834 Elapsed time = 1.306 seconds 00:04:32.834 EAL: request: mp_malloc_sync 00:04:32.834 EAL: No shared files mode enabled, IPC is disabled 00:04:32.834 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.834 EAL: request: mp_malloc_sync 00:04:32.834 EAL: No shared files mode enabled, IPC is disabled 00:04:32.834 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.834 EAL: No shared files mode enabled, IPC is disabled 00:04:32.834 EAL: No shared files mode enabled, IPC is disabled 00:04:32.834 EAL: No shared files mode enabled, IPC is disabled 00:04:32.834 00:04:32.834 real 0m1.506s 00:04:32.834 user 0m0.833s 00:04:32.834 sys 0m0.539s 00:04:32.834 22:34:48 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.834 ************************************ 00:04:32.834 END TEST env_vtophys 00:04:32.834 ************************************ 00:04:32.834 22:34:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:32.834 22:34:48 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.834 22:34:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.834 22:34:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.834 22:34:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.834 22:34:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.834 ************************************ 00:04:32.834 START TEST env_pci 00:04:32.834 ************************************ 00:04:32.834 22:34:48 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.834 00:04:32.834 00:04:32.834 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.834 http://cunit.sourceforge.net/ 00:04:32.834 00:04:32.834 00:04:32.834 Suite: pci 00:04:32.834 Test: pci_hook ...[2024-07-15 22:34:48.356678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58574 has claimed it 00:04:32.834 passed 00:04:32.834 00:04:32.834 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.834 suites 1 1 n/a 0 0 00:04:32.834 tests 1 1 1 0 0 00:04:32.834 asserts 25 25 25 0 n/a 00:04:32.834 
00:04:32.834 Elapsed time = 0.002 seconds 00:04:32.834 EAL: Cannot find device (10000:00:01.0) 00:04:32.834 EAL: Failed to attach device on primary process 00:04:32.834 00:04:32.834 real 0m0.022s 00:04:32.834 user 0m0.014s 00:04:32.834 sys 0m0.008s 00:04:32.834 22:34:48 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.834 ************************************ 00:04:32.834 END TEST env_pci 00:04:32.834 ************************************ 00:04:32.834 22:34:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 22:34:48 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.094 22:34:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.094 22:34:48 env -- env/env.sh@15 -- # uname 00:04:33.094 22:34:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.094 22:34:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.094 22:34:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.094 22:34:48 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:33.094 22:34:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.094 22:34:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 ************************************ 00:04:33.094 START TEST env_dpdk_post_init 00:04:33.094 ************************************ 00:04:33.094 22:34:48 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.094 EAL: Detected CPU lcores: 10 00:04:33.094 EAL: Detected NUMA nodes: 1 00:04:33.094 EAL: Detected shared linkage of DPDK 00:04:33.094 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.094 EAL: Selected IOVA mode 'PA' 00:04:33.094 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.094 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:33.094 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:33.094 Starting DPDK initialization... 00:04:33.094 Starting SPDK post initialization... 00:04:33.094 SPDK NVMe probe 00:04:33.094 Attaching to 0000:00:10.0 00:04:33.094 Attaching to 0000:00:11.0 00:04:33.094 Attached to 0000:00:10.0 00:04:33.094 Attached to 0000:00:11.0 00:04:33.094 Cleaning up... 
00:04:33.094 00:04:33.094 real 0m0.176s 00:04:33.094 user 0m0.041s 00:04:33.094 sys 0m0.036s 00:04:33.094 22:34:48 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.094 ************************************ 00:04:33.094 END TEST env_dpdk_post_init 00:04:33.094 ************************************ 00:04:33.094 22:34:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 22:34:48 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.094 22:34:48 env -- env/env.sh@26 -- # uname 00:04:33.094 22:34:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.094 22:34:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.094 22:34:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.094 22:34:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.094 22:34:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 ************************************ 00:04:33.094 START TEST env_mem_callbacks 00:04:33.094 ************************************ 00:04:33.094 22:34:48 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.353 EAL: Detected CPU lcores: 10 00:04:33.353 EAL: Detected NUMA nodes: 1 00:04:33.353 EAL: Detected shared linkage of DPDK 00:04:33.353 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.353 EAL: Selected IOVA mode 'PA' 00:04:33.353 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.353 00:04:33.353 00:04:33.353 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.353 http://cunit.sourceforge.net/ 00:04:33.353 00:04:33.353 00:04:33.353 Suite: memory 00:04:33.353 Test: test ... 
00:04:33.353 register 0x200000200000 2097152 00:04:33.353 malloc 3145728 00:04:33.353 register 0x200000400000 4194304 00:04:33.353 buf 0x200000500000 len 3145728 PASSED 00:04:33.353 malloc 64 00:04:33.353 buf 0x2000004fff40 len 64 PASSED 00:04:33.353 malloc 4194304 00:04:33.353 register 0x200000800000 6291456 00:04:33.353 buf 0x200000a00000 len 4194304 PASSED 00:04:33.353 free 0x200000500000 3145728 00:04:33.353 free 0x2000004fff40 64 00:04:33.353 unregister 0x200000400000 4194304 PASSED 00:04:33.353 free 0x200000a00000 4194304 00:04:33.353 unregister 0x200000800000 6291456 PASSED 00:04:33.353 malloc 8388608 00:04:33.353 register 0x200000400000 10485760 00:04:33.353 buf 0x200000600000 len 8388608 PASSED 00:04:33.353 free 0x200000600000 8388608 00:04:33.353 unregister 0x200000400000 10485760 PASSED 00:04:33.353 passed 00:04:33.353 00:04:33.353 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.353 suites 1 1 n/a 0 0 00:04:33.353 tests 1 1 1 0 0 00:04:33.353 asserts 15 15 15 0 n/a 00:04:33.353 00:04:33.353 Elapsed time = 0.009 seconds 00:04:33.353 00:04:33.353 real 0m0.147s 00:04:33.353 user 0m0.019s 00:04:33.353 sys 0m0.026s 00:04:33.353 22:34:48 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.353 ************************************ 00:04:33.353 END TEST env_mem_callbacks 00:04:33.353 ************************************ 00:04:33.353 22:34:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.353 22:34:48 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.353 00:04:33.353 real 0m2.426s 00:04:33.353 user 0m1.234s 00:04:33.353 sys 0m0.836s 00:04:33.353 22:34:48 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.353 22:34:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.353 ************************************ 00:04:33.353 END TEST env 00:04:33.353 ************************************ 00:04:33.353 22:34:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.353 22:34:48 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:33.353 22:34:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.353 22:34:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.353 22:34:48 -- common/autotest_common.sh@10 -- # set +x 00:04:33.353 ************************************ 00:04:33.353 START TEST rpc 00:04:33.353 ************************************ 00:04:33.353 22:34:48 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:33.612 * Looking for test storage... 00:04:33.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.612 22:34:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58689 00:04:33.612 22:34:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.612 22:34:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58689 00:04:33.612 22:34:48 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:33.612 22:34:48 rpc -- common/autotest_common.sh@829 -- # '[' -z 58689 ']' 00:04:33.612 22:34:48 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.612 22:34:48 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.612 22:34:48 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
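The rpc suite starting here launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then waits, via the waitforlisten helper, for the default RPC socket /var/tmp/spdk.sock to answer. A rough by-hand equivalent (not the helper's exact implementation; paths as used in this run):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # poll the RPC socket until the target is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done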
00:04:33.612 22:34:48 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.612 22:34:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.612 [2024-07-15 22:34:49.025949] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:33.612 [2024-07-15 22:34:49.026050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58689 ] 00:04:33.612 [2024-07-15 22:34:49.165742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.870 [2024-07-15 22:34:49.270602] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.870 [2024-07-15 22:34:49.270663] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58689' to capture a snapshot of events at runtime. 00:04:33.870 [2024-07-15 22:34:49.270674] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.870 [2024-07-15 22:34:49.270682] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.870 [2024-07-15 22:34:49.270689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58689 for offline analysis/debug. 00:04:33.870 [2024-07-15 22:34:49.270712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.870 [2024-07-15 22:34:49.325492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.437 22:34:49 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.437 22:34:49 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:34.437 22:34:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.437 22:34:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.437 22:34:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.437 22:34:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.437 22:34:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.437 22:34:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.437 22:34:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.437 ************************************ 00:04:34.437 START TEST rpc_integrity 00:04:34.437 ************************************ 00:04:34.437 22:34:49 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:34.437 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.437 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.437 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.696 { 00:04:34.696 "name": "Malloc0", 00:04:34.696 "aliases": [ 00:04:34.696 "275e3bc0-71ac-4552-85c2-c7b13707cf44" 00:04:34.696 ], 00:04:34.696 "product_name": "Malloc disk", 00:04:34.696 "block_size": 512, 00:04:34.696 "num_blocks": 16384, 00:04:34.696 "uuid": "275e3bc0-71ac-4552-85c2-c7b13707cf44", 00:04:34.696 "assigned_rate_limits": { 00:04:34.696 "rw_ios_per_sec": 0, 00:04:34.696 "rw_mbytes_per_sec": 0, 00:04:34.696 "r_mbytes_per_sec": 0, 00:04:34.696 "w_mbytes_per_sec": 0 00:04:34.696 }, 00:04:34.696 "claimed": false, 00:04:34.696 "zoned": false, 00:04:34.696 "supported_io_types": { 00:04:34.696 "read": true, 00:04:34.696 "write": true, 00:04:34.696 "unmap": true, 00:04:34.696 "flush": true, 00:04:34.696 "reset": true, 00:04:34.696 "nvme_admin": false, 00:04:34.696 "nvme_io": false, 00:04:34.696 "nvme_io_md": false, 00:04:34.696 "write_zeroes": true, 00:04:34.696 "zcopy": true, 00:04:34.696 "get_zone_info": false, 00:04:34.696 "zone_management": false, 00:04:34.696 "zone_append": false, 00:04:34.696 "compare": false, 00:04:34.696 "compare_and_write": false, 00:04:34.696 "abort": true, 00:04:34.696 "seek_hole": false, 00:04:34.696 "seek_data": false, 00:04:34.696 "copy": true, 00:04:34.696 "nvme_iov_md": false 00:04:34.696 }, 00:04:34.696 "memory_domains": [ 00:04:34.696 { 00:04:34.696 "dma_device_id": "system", 00:04:34.696 "dma_device_type": 1 00:04:34.696 }, 00:04:34.696 { 00:04:34.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.696 "dma_device_type": 2 00:04:34.696 } 00:04:34.696 ], 00:04:34.696 "driver_specific": {} 00:04:34.696 } 00:04:34.696 ]' 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 [2024-07-15 22:34:50.173937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.696 [2024-07-15 22:34:50.174042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.696 [2024-07-15 22:34:50.174059] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1643890 00:04:34.696 [2024-07-15 22:34:50.174068] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.696 [2024-07-15 22:34:50.175654] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.696 [2024-07-15 22:34:50.175691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.696 
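The rpc_integrity test drives the target purely through RPCs: create an 8 MiB malloc bdev with 512-byte blocks, layer a passthru bdev on top of it, and check the bdev list lengths with jq. The same sequence by hand with rpc.py (rpc_cmd in the trace issues these same methods against /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                       # 16384 x 512-byte blocks -> Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # claims Malloc0, exposes Passthru0
  $rpc bdev_get_bdevs | jq length                     # expect 2, as checked in the trace
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                     # back to 0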
Passthru0 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.696 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.696 { 00:04:34.696 "name": "Malloc0", 00:04:34.696 "aliases": [ 00:04:34.696 "275e3bc0-71ac-4552-85c2-c7b13707cf44" 00:04:34.696 ], 00:04:34.696 "product_name": "Malloc disk", 00:04:34.696 "block_size": 512, 00:04:34.696 "num_blocks": 16384, 00:04:34.696 "uuid": "275e3bc0-71ac-4552-85c2-c7b13707cf44", 00:04:34.696 "assigned_rate_limits": { 00:04:34.696 "rw_ios_per_sec": 0, 00:04:34.696 "rw_mbytes_per_sec": 0, 00:04:34.696 "r_mbytes_per_sec": 0, 00:04:34.696 "w_mbytes_per_sec": 0 00:04:34.696 }, 00:04:34.696 "claimed": true, 00:04:34.696 "claim_type": "exclusive_write", 00:04:34.696 "zoned": false, 00:04:34.696 "supported_io_types": { 00:04:34.696 "read": true, 00:04:34.696 "write": true, 00:04:34.696 "unmap": true, 00:04:34.696 "flush": true, 00:04:34.696 "reset": true, 00:04:34.696 "nvme_admin": false, 00:04:34.696 "nvme_io": false, 00:04:34.696 "nvme_io_md": false, 00:04:34.696 "write_zeroes": true, 00:04:34.696 "zcopy": true, 00:04:34.696 "get_zone_info": false, 00:04:34.696 "zone_management": false, 00:04:34.696 "zone_append": false, 00:04:34.696 "compare": false, 00:04:34.696 "compare_and_write": false, 00:04:34.696 "abort": true, 00:04:34.696 "seek_hole": false, 00:04:34.696 "seek_data": false, 00:04:34.696 "copy": true, 00:04:34.696 "nvme_iov_md": false 00:04:34.696 }, 00:04:34.696 "memory_domains": [ 00:04:34.696 { 00:04:34.696 "dma_device_id": "system", 00:04:34.696 "dma_device_type": 1 00:04:34.696 }, 00:04:34.696 { 00:04:34.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.696 "dma_device_type": 2 00:04:34.696 } 00:04:34.696 ], 00:04:34.696 "driver_specific": {} 00:04:34.696 }, 00:04:34.696 { 00:04:34.696 "name": "Passthru0", 00:04:34.696 "aliases": [ 00:04:34.696 "f4e7f039-2021-5c7c-8aa9-fa99e261bdeb" 00:04:34.696 ], 00:04:34.696 "product_name": "passthru", 00:04:34.696 "block_size": 512, 00:04:34.696 "num_blocks": 16384, 00:04:34.696 "uuid": "f4e7f039-2021-5c7c-8aa9-fa99e261bdeb", 00:04:34.696 "assigned_rate_limits": { 00:04:34.696 "rw_ios_per_sec": 0, 00:04:34.696 "rw_mbytes_per_sec": 0, 00:04:34.696 "r_mbytes_per_sec": 0, 00:04:34.696 "w_mbytes_per_sec": 0 00:04:34.696 }, 00:04:34.696 "claimed": false, 00:04:34.696 "zoned": false, 00:04:34.696 "supported_io_types": { 00:04:34.696 "read": true, 00:04:34.696 "write": true, 00:04:34.696 "unmap": true, 00:04:34.696 "flush": true, 00:04:34.697 "reset": true, 00:04:34.697 "nvme_admin": false, 00:04:34.697 "nvme_io": false, 00:04:34.697 "nvme_io_md": false, 00:04:34.697 "write_zeroes": true, 00:04:34.697 "zcopy": true, 00:04:34.697 "get_zone_info": false, 00:04:34.697 "zone_management": false, 00:04:34.697 "zone_append": false, 00:04:34.697 "compare": false, 00:04:34.697 "compare_and_write": false, 00:04:34.697 "abort": true, 00:04:34.697 "seek_hole": false, 00:04:34.697 "seek_data": false, 00:04:34.697 "copy": true, 00:04:34.697 "nvme_iov_md": false 00:04:34.697 }, 00:04:34.697 "memory_domains": [ 00:04:34.697 { 00:04:34.697 "dma_device_id": "system", 00:04:34.697 "dma_device_type": 1 00:04:34.697 }, 
00:04:34.697 { 00:04:34.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.697 "dma_device_type": 2 00:04:34.697 } 00:04:34.697 ], 00:04:34.697 "driver_specific": { 00:04:34.697 "passthru": { 00:04:34.697 "name": "Passthru0", 00:04:34.697 "base_bdev_name": "Malloc0" 00:04:34.697 } 00:04:34.697 } 00:04:34.697 } 00:04:34.697 ]' 00:04:34.697 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.697 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.697 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.697 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.697 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.955 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.956 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.956 22:34:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.956 00:04:34.956 real 0m0.327s 00:04:34.956 user 0m0.215s 00:04:34.956 sys 0m0.044s 00:04:34.956 ************************************ 00:04:34.956 END TEST rpc_integrity 00:04:34.956 ************************************ 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.956 22:34:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.956 22:34:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.956 22:34:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.956 22:34:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 ************************************ 00:04:34.956 START TEST rpc_plugins 00:04:34.956 ************************************ 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc.rpc_plugins -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.956 { 00:04:34.956 "name": "Malloc1", 00:04:34.956 "aliases": [ 00:04:34.956 "904b32e7-fc00-4d5f-b232-94f5089f9ebd" 00:04:34.956 ], 00:04:34.956 "product_name": "Malloc disk", 00:04:34.956 "block_size": 4096, 00:04:34.956 "num_blocks": 256, 00:04:34.956 "uuid": "904b32e7-fc00-4d5f-b232-94f5089f9ebd", 00:04:34.956 "assigned_rate_limits": { 00:04:34.956 "rw_ios_per_sec": 0, 00:04:34.956 "rw_mbytes_per_sec": 0, 00:04:34.956 "r_mbytes_per_sec": 0, 00:04:34.956 "w_mbytes_per_sec": 0 00:04:34.956 }, 00:04:34.956 "claimed": false, 00:04:34.956 "zoned": false, 00:04:34.956 "supported_io_types": { 00:04:34.956 "read": true, 00:04:34.956 "write": true, 00:04:34.956 "unmap": true, 00:04:34.956 "flush": true, 00:04:34.956 "reset": true, 00:04:34.956 "nvme_admin": false, 00:04:34.956 "nvme_io": false, 00:04:34.956 "nvme_io_md": false, 00:04:34.956 "write_zeroes": true, 00:04:34.956 "zcopy": true, 00:04:34.956 "get_zone_info": false, 00:04:34.956 "zone_management": false, 00:04:34.956 "zone_append": false, 00:04:34.956 "compare": false, 00:04:34.956 "compare_and_write": false, 00:04:34.956 "abort": true, 00:04:34.956 "seek_hole": false, 00:04:34.956 "seek_data": false, 00:04:34.956 "copy": true, 00:04:34.956 "nvme_iov_md": false 00:04:34.956 }, 00:04:34.956 "memory_domains": [ 00:04:34.956 { 00:04:34.956 "dma_device_id": "system", 00:04:34.956 "dma_device_type": 1 00:04:34.956 }, 00:04:34.956 { 00:04:34.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.956 "dma_device_type": 2 00:04:34.956 } 00:04:34.956 ], 00:04:34.956 "driver_specific": {} 00:04:34.956 } 00:04:34.956 ]' 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.956 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.956 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.215 22:34:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.215 00:04:35.215 real 0m0.153s 00:04:35.215 user 0m0.104s 00:04:35.215 sys 0m0.015s 00:04:35.215 ************************************ 00:04:35.215 END TEST rpc_plugins 00:04:35.215 ************************************ 00:04:35.215 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.215 22:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.215 22:34:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.215 22:34:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.215 22:34:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.215 22:34:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.215 22:34:50 rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:35.215 ************************************ 00:04:35.215 START TEST rpc_trace_cmd_test 00:04:35.215 ************************************ 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.215 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58689", 00:04:35.215 "tpoint_group_mask": "0x8", 00:04:35.215 "iscsi_conn": { 00:04:35.215 "mask": "0x2", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "scsi": { 00:04:35.215 "mask": "0x4", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "bdev": { 00:04:35.215 "mask": "0x8", 00:04:35.215 "tpoint_mask": "0xffffffffffffffff" 00:04:35.215 }, 00:04:35.215 "nvmf_rdma": { 00:04:35.215 "mask": "0x10", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "nvmf_tcp": { 00:04:35.215 "mask": "0x20", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "ftl": { 00:04:35.215 "mask": "0x40", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "blobfs": { 00:04:35.215 "mask": "0x80", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "dsa": { 00:04:35.215 "mask": "0x200", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "thread": { 00:04:35.215 "mask": "0x400", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "nvme_pcie": { 00:04:35.215 "mask": "0x800", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "iaa": { 00:04:35.215 "mask": "0x1000", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "nvme_tcp": { 00:04:35.215 "mask": "0x2000", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "bdev_nvme": { 00:04:35.215 "mask": "0x4000", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 }, 00:04:35.215 "sock": { 00:04:35.215 "mask": "0x8000", 00:04:35.215 "tpoint_mask": "0x0" 00:04:35.215 } 00:04:35.215 }' 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.215 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.216 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.475 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.475 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.475 22:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.475 00:04:35.475 real 0m0.288s 00:04:35.475 user 0m0.253s 00:04:35.475 sys 0m0.026s 00:04:35.475 ************************************ 00:04:35.475 END TEST rpc_trace_cmd_test 00:04:35.475 ************************************ 
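rpc_trace_cmd_test confirms that the bdev tracepoint group requested at startup (-e bdev, group mask 0x8) is fully enabled and that a shared-memory trace file exists for this pid. The same information can be pulled interactively, and the target's own startup notices show how to capture a snapshot of the events:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask   # 0xffffffffffffffff
  spdk_trace -s spdk_tgt -p 58689          # as suggested in the startup notices earlier in this log
  # or copy /dev/shm/spdk_tgt_trace.pid58689 for offline analysis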
00:04:35.475 22:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.475 22:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.475 22:34:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.475 22:34:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.475 22:34:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.475 22:34:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.475 22:34:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.475 22:34:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.475 22:34:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.475 ************************************ 00:04:35.475 START TEST rpc_daemon_integrity 00:04:35.475 ************************************ 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.475 22:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.475 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.475 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.475 { 00:04:35.475 "name": "Malloc2", 00:04:35.475 "aliases": [ 00:04:35.475 "19861a20-0345-4ca7-8e27-65af9e95e2dc" 00:04:35.475 ], 00:04:35.475 "product_name": "Malloc disk", 00:04:35.475 "block_size": 512, 00:04:35.475 "num_blocks": 16384, 00:04:35.475 "uuid": "19861a20-0345-4ca7-8e27-65af9e95e2dc", 00:04:35.475 "assigned_rate_limits": { 00:04:35.475 "rw_ios_per_sec": 0, 00:04:35.475 "rw_mbytes_per_sec": 0, 00:04:35.475 "r_mbytes_per_sec": 0, 00:04:35.475 "w_mbytes_per_sec": 0 00:04:35.475 }, 00:04:35.475 "claimed": false, 00:04:35.475 "zoned": false, 00:04:35.475 "supported_io_types": { 00:04:35.475 "read": true, 00:04:35.475 "write": true, 00:04:35.475 "unmap": true, 00:04:35.475 "flush": true, 00:04:35.475 "reset": true, 00:04:35.475 "nvme_admin": false, 00:04:35.475 "nvme_io": false, 00:04:35.475 "nvme_io_md": false, 00:04:35.475 "write_zeroes": true, 00:04:35.475 "zcopy": true, 00:04:35.475 "get_zone_info": false, 00:04:35.475 "zone_management": false, 00:04:35.475 "zone_append": false, 00:04:35.475 "compare": 
false, 00:04:35.475 "compare_and_write": false, 00:04:35.475 "abort": true, 00:04:35.475 "seek_hole": false, 00:04:35.475 "seek_data": false, 00:04:35.475 "copy": true, 00:04:35.475 "nvme_iov_md": false 00:04:35.475 }, 00:04:35.475 "memory_domains": [ 00:04:35.475 { 00:04:35.475 "dma_device_id": "system", 00:04:35.475 "dma_device_type": 1 00:04:35.475 }, 00:04:35.475 { 00:04:35.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.475 "dma_device_type": 2 00:04:35.475 } 00:04:35.475 ], 00:04:35.475 "driver_specific": {} 00:04:35.475 } 00:04:35.475 ]' 00:04:35.475 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 [2024-07-15 22:34:51.070770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.735 [2024-07-15 22:34:51.070822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.735 [2024-07-15 22:34:51.070840] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1643100 00:04:35.735 [2024-07-15 22:34:51.070850] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.735 [2024-07-15 22:34:51.072423] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.735 [2024-07-15 22:34:51.072455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.735 Passthru0 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.735 { 00:04:35.735 "name": "Malloc2", 00:04:35.735 "aliases": [ 00:04:35.735 "19861a20-0345-4ca7-8e27-65af9e95e2dc" 00:04:35.735 ], 00:04:35.735 "product_name": "Malloc disk", 00:04:35.735 "block_size": 512, 00:04:35.735 "num_blocks": 16384, 00:04:35.735 "uuid": "19861a20-0345-4ca7-8e27-65af9e95e2dc", 00:04:35.735 "assigned_rate_limits": { 00:04:35.735 "rw_ios_per_sec": 0, 00:04:35.735 "rw_mbytes_per_sec": 0, 00:04:35.735 "r_mbytes_per_sec": 0, 00:04:35.735 "w_mbytes_per_sec": 0 00:04:35.735 }, 00:04:35.735 "claimed": true, 00:04:35.735 "claim_type": "exclusive_write", 00:04:35.735 "zoned": false, 00:04:35.735 "supported_io_types": { 00:04:35.735 "read": true, 00:04:35.735 "write": true, 00:04:35.735 "unmap": true, 00:04:35.735 "flush": true, 00:04:35.735 "reset": true, 00:04:35.735 "nvme_admin": false, 00:04:35.735 "nvme_io": false, 00:04:35.735 "nvme_io_md": false, 00:04:35.735 "write_zeroes": true, 00:04:35.735 "zcopy": true, 00:04:35.735 "get_zone_info": false, 00:04:35.735 "zone_management": false, 00:04:35.735 "zone_append": false, 00:04:35.735 "compare": false, 00:04:35.735 "compare_and_write": false, 00:04:35.735 "abort": true, 00:04:35.735 "seek_hole": false, 00:04:35.735 
"seek_data": false, 00:04:35.735 "copy": true, 00:04:35.735 "nvme_iov_md": false 00:04:35.735 }, 00:04:35.735 "memory_domains": [ 00:04:35.735 { 00:04:35.735 "dma_device_id": "system", 00:04:35.735 "dma_device_type": 1 00:04:35.735 }, 00:04:35.735 { 00:04:35.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.735 "dma_device_type": 2 00:04:35.735 } 00:04:35.735 ], 00:04:35.735 "driver_specific": {} 00:04:35.735 }, 00:04:35.735 { 00:04:35.735 "name": "Passthru0", 00:04:35.735 "aliases": [ 00:04:35.735 "82c2c0c5-43bc-5e8b-9255-6035d4bbac24" 00:04:35.735 ], 00:04:35.735 "product_name": "passthru", 00:04:35.735 "block_size": 512, 00:04:35.735 "num_blocks": 16384, 00:04:35.735 "uuid": "82c2c0c5-43bc-5e8b-9255-6035d4bbac24", 00:04:35.735 "assigned_rate_limits": { 00:04:35.735 "rw_ios_per_sec": 0, 00:04:35.735 "rw_mbytes_per_sec": 0, 00:04:35.735 "r_mbytes_per_sec": 0, 00:04:35.735 "w_mbytes_per_sec": 0 00:04:35.735 }, 00:04:35.735 "claimed": false, 00:04:35.735 "zoned": false, 00:04:35.735 "supported_io_types": { 00:04:35.735 "read": true, 00:04:35.735 "write": true, 00:04:35.735 "unmap": true, 00:04:35.735 "flush": true, 00:04:35.735 "reset": true, 00:04:35.735 "nvme_admin": false, 00:04:35.735 "nvme_io": false, 00:04:35.735 "nvme_io_md": false, 00:04:35.735 "write_zeroes": true, 00:04:35.735 "zcopy": true, 00:04:35.735 "get_zone_info": false, 00:04:35.735 "zone_management": false, 00:04:35.735 "zone_append": false, 00:04:35.735 "compare": false, 00:04:35.735 "compare_and_write": false, 00:04:35.735 "abort": true, 00:04:35.735 "seek_hole": false, 00:04:35.735 "seek_data": false, 00:04:35.735 "copy": true, 00:04:35.735 "nvme_iov_md": false 00:04:35.735 }, 00:04:35.735 "memory_domains": [ 00:04:35.735 { 00:04:35.735 "dma_device_id": "system", 00:04:35.735 "dma_device_type": 1 00:04:35.735 }, 00:04:35.735 { 00:04:35.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.735 "dma_device_type": 2 00:04:35.735 } 00:04:35.735 ], 00:04:35.735 "driver_specific": { 00:04:35.735 "passthru": { 00:04:35.735 "name": "Passthru0", 00:04:35.735 "base_bdev_name": "Malloc2" 00:04:35.735 } 00:04:35.735 } 00:04:35.735 } 00:04:35.735 ]' 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # 
bdevs='[]' 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.735 00:04:35.735 real 0m0.307s 00:04:35.735 user 0m0.204s 00:04:35.735 sys 0m0.040s 00:04:35.735 ************************************ 00:04:35.735 END TEST rpc_daemon_integrity 00:04:35.735 ************************************ 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.735 22:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 22:34:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.735 22:34:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.735 22:34:51 rpc -- rpc/rpc.sh@84 -- # killprocess 58689 00:04:35.735 22:34:51 rpc -- common/autotest_common.sh@948 -- # '[' -z 58689 ']' 00:04:35.735 22:34:51 rpc -- common/autotest_common.sh@952 -- # kill -0 58689 00:04:35.735 22:34:51 rpc -- common/autotest_common.sh@953 -- # uname 00:04:35.735 22:34:51 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.735 22:34:51 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58689 00:04:35.994 22:34:51 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.994 killing process with pid 58689 00:04:35.994 22:34:51 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.994 22:34:51 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58689' 00:04:35.994 22:34:51 rpc -- common/autotest_common.sh@967 -- # kill 58689 00:04:35.994 22:34:51 rpc -- common/autotest_common.sh@972 -- # wait 58689 00:04:36.294 00:04:36.294 real 0m2.805s 00:04:36.294 user 0m3.649s 00:04:36.294 sys 0m0.668s 00:04:36.294 22:34:51 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.294 ************************************ 00:04:36.294 END TEST rpc 00:04:36.294 ************************************ 00:04:36.294 22:34:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.294 22:34:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.294 22:34:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:36.294 22:34:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.294 22:34:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.294 22:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.294 ************************************ 00:04:36.294 START TEST skip_rpc 00:04:36.294 ************************************ 00:04:36.294 22:34:51 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:36.294 * Looking for test storage... 
00:04:36.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.294 22:34:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.294 22:34:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:36.294 22:34:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.294 22:34:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.294 22:34:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.294 22:34:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.294 ************************************ 00:04:36.294 START TEST skip_rpc 00:04:36.294 ************************************ 00:04:36.294 22:34:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:36.294 22:34:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58881 00:04:36.294 22:34:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.294 22:34:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.294 22:34:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.553 [2024-07-15 22:34:51.890580] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:36.553 [2024-07-15 22:34:51.890686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58881 ] 00:04:36.553 [2024-07-15 22:34:52.029603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.812 [2024-07-15 22:34:52.132436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.812 [2024-07-15 22:34:52.186182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:42.080 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - 
SIGINT SIGTERM EXIT 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58881 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58881 ']' 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58881 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58881 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.081 killing process with pid 58881 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58881' 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58881 00:04:42.081 22:34:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58881 00:04:42.081 00:04:42.081 real 0m5.476s 00:04:42.081 user 0m5.086s 00:04:42.081 sys 0m0.292s 00:04:42.081 ************************************ 00:04:42.081 END TEST skip_rpc 00:04:42.081 22:34:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.081 22:34:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.081 ************************************ 00:04:42.081 22:34:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:42.081 22:34:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.081 22:34:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.081 22:34:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.081 22:34:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.081 ************************************ 00:04:42.081 START TEST skip_rpc_with_json 00:04:42.081 ************************************ 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58968 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58968 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 58968 ']' 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
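skip_rpc_with_json, which starts here, builds a JSON configuration from a live target and then proves a second target can boot from it with the RPC server disabled. Reduced to rpc.py calls and the flags visible later in this run, the flow is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp                                  # something non-default to persist
  $rpc save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # restart from the saved file; 'TCP Transport Init' in the new log proves the transport was re-created
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json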
00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.081 22:34:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.081 [2024-07-15 22:34:57.421635] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:42.081 [2024-07-15 22:34:57.421742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ] 00:04:42.081 [2024-07-15 22:34:57.561870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.340 [2024-07-15 22:34:57.677541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.340 [2024-07-15 22:34:57.741493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.925 [2024-07-15 22:34:58.441864] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.925 request: 00:04:42.925 { 00:04:42.925 "trtype": "tcp", 00:04:42.925 "method": "nvmf_get_transports", 00:04:42.925 "req_id": 1 00:04:42.925 } 00:04:42.925 Got JSON-RPC error response 00:04:42.925 response: 00:04:42.925 { 00:04:42.925 "code": -19, 00:04:42.925 "message": "No such device" 00:04:42.925 } 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.925 [2024-07-15 22:34:58.449965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.925 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.182 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.182 22:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.182 { 00:04:43.182 "subsystems": [ 00:04:43.182 { 00:04:43.182 "subsystem": "keyring", 00:04:43.182 "config": [] 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "subsystem": "iobuf", 00:04:43.182 "config": [ 00:04:43.182 { 00:04:43.182 "method": "iobuf_set_options", 00:04:43.182 "params": { 00:04:43.182 "small_pool_count": 8192, 00:04:43.182 "large_pool_count": 1024, 00:04:43.182 "small_bufsize": 8192, 00:04:43.182 "large_bufsize": 135168 00:04:43.182 } 00:04:43.182 } 00:04:43.182 ] 00:04:43.182 }, 
00:04:43.182 { 00:04:43.182 "subsystem": "sock", 00:04:43.182 "config": [ 00:04:43.182 { 00:04:43.182 "method": "sock_set_default_impl", 00:04:43.182 "params": { 00:04:43.182 "impl_name": "uring" 00:04:43.182 } 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "method": "sock_impl_set_options", 00:04:43.182 "params": { 00:04:43.182 "impl_name": "ssl", 00:04:43.182 "recv_buf_size": 4096, 00:04:43.182 "send_buf_size": 4096, 00:04:43.182 "enable_recv_pipe": true, 00:04:43.182 "enable_quickack": false, 00:04:43.182 "enable_placement_id": 0, 00:04:43.182 "enable_zerocopy_send_server": true, 00:04:43.182 "enable_zerocopy_send_client": false, 00:04:43.182 "zerocopy_threshold": 0, 00:04:43.182 "tls_version": 0, 00:04:43.182 "enable_ktls": false 00:04:43.182 } 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "method": "sock_impl_set_options", 00:04:43.182 "params": { 00:04:43.182 "impl_name": "posix", 00:04:43.182 "recv_buf_size": 2097152, 00:04:43.182 "send_buf_size": 2097152, 00:04:43.182 "enable_recv_pipe": true, 00:04:43.182 "enable_quickack": false, 00:04:43.182 "enable_placement_id": 0, 00:04:43.182 "enable_zerocopy_send_server": true, 00:04:43.182 "enable_zerocopy_send_client": false, 00:04:43.182 "zerocopy_threshold": 0, 00:04:43.182 "tls_version": 0, 00:04:43.182 "enable_ktls": false 00:04:43.182 } 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "method": "sock_impl_set_options", 00:04:43.182 "params": { 00:04:43.182 "impl_name": "uring", 00:04:43.182 "recv_buf_size": 2097152, 00:04:43.182 "send_buf_size": 2097152, 00:04:43.182 "enable_recv_pipe": true, 00:04:43.182 "enable_quickack": false, 00:04:43.182 "enable_placement_id": 0, 00:04:43.182 "enable_zerocopy_send_server": false, 00:04:43.182 "enable_zerocopy_send_client": false, 00:04:43.182 "zerocopy_threshold": 0, 00:04:43.182 "tls_version": 0, 00:04:43.182 "enable_ktls": false 00:04:43.182 } 00:04:43.182 } 00:04:43.182 ] 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "subsystem": "vmd", 00:04:43.182 "config": [] 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "subsystem": "accel", 00:04:43.182 "config": [ 00:04:43.182 { 00:04:43.182 "method": "accel_set_options", 00:04:43.182 "params": { 00:04:43.182 "small_cache_size": 128, 00:04:43.182 "large_cache_size": 16, 00:04:43.182 "task_count": 2048, 00:04:43.182 "sequence_count": 2048, 00:04:43.182 "buf_count": 2048 00:04:43.182 } 00:04:43.182 } 00:04:43.182 ] 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "subsystem": "bdev", 00:04:43.182 "config": [ 00:04:43.182 { 00:04:43.182 "method": "bdev_set_options", 00:04:43.182 "params": { 00:04:43.182 "bdev_io_pool_size": 65535, 00:04:43.182 "bdev_io_cache_size": 256, 00:04:43.182 "bdev_auto_examine": true, 00:04:43.182 "iobuf_small_cache_size": 128, 00:04:43.182 "iobuf_large_cache_size": 16 00:04:43.182 } 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "method": "bdev_raid_set_options", 00:04:43.182 "params": { 00:04:43.182 "process_window_size_kb": 1024 00:04:43.182 } 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "method": "bdev_iscsi_set_options", 00:04:43.182 "params": { 00:04:43.182 "timeout_sec": 30 00:04:43.182 } 00:04:43.182 }, 00:04:43.182 { 00:04:43.182 "method": "bdev_nvme_set_options", 00:04:43.182 "params": { 00:04:43.182 "action_on_timeout": "none", 00:04:43.182 "timeout_us": 0, 00:04:43.182 "timeout_admin_us": 0, 00:04:43.182 "keep_alive_timeout_ms": 10000, 00:04:43.182 "arbitration_burst": 0, 00:04:43.182 "low_priority_weight": 0, 00:04:43.182 "medium_priority_weight": 0, 00:04:43.183 "high_priority_weight": 0, 00:04:43.183 "nvme_adminq_poll_period_us": 10000, 
00:04:43.183 "nvme_ioq_poll_period_us": 0, 00:04:43.183 "io_queue_requests": 0, 00:04:43.183 "delay_cmd_submit": true, 00:04:43.183 "transport_retry_count": 4, 00:04:43.183 "bdev_retry_count": 3, 00:04:43.183 "transport_ack_timeout": 0, 00:04:43.183 "ctrlr_loss_timeout_sec": 0, 00:04:43.183 "reconnect_delay_sec": 0, 00:04:43.183 "fast_io_fail_timeout_sec": 0, 00:04:43.183 "disable_auto_failback": false, 00:04:43.183 "generate_uuids": false, 00:04:43.183 "transport_tos": 0, 00:04:43.183 "nvme_error_stat": false, 00:04:43.183 "rdma_srq_size": 0, 00:04:43.183 "io_path_stat": false, 00:04:43.183 "allow_accel_sequence": false, 00:04:43.183 "rdma_max_cq_size": 0, 00:04:43.183 "rdma_cm_event_timeout_ms": 0, 00:04:43.183 "dhchap_digests": [ 00:04:43.183 "sha256", 00:04:43.183 "sha384", 00:04:43.183 "sha512" 00:04:43.183 ], 00:04:43.183 "dhchap_dhgroups": [ 00:04:43.183 "null", 00:04:43.183 "ffdhe2048", 00:04:43.183 "ffdhe3072", 00:04:43.183 "ffdhe4096", 00:04:43.183 "ffdhe6144", 00:04:43.183 "ffdhe8192" 00:04:43.183 ] 00:04:43.183 } 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "method": "bdev_nvme_set_hotplug", 00:04:43.183 "params": { 00:04:43.183 "period_us": 100000, 00:04:43.183 "enable": false 00:04:43.183 } 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "method": "bdev_wait_for_examine" 00:04:43.183 } 00:04:43.183 ] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "scsi", 00:04:43.183 "config": null 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "scheduler", 00:04:43.183 "config": [ 00:04:43.183 { 00:04:43.183 "method": "framework_set_scheduler", 00:04:43.183 "params": { 00:04:43.183 "name": "static" 00:04:43.183 } 00:04:43.183 } 00:04:43.183 ] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "vhost_scsi", 00:04:43.183 "config": [] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "vhost_blk", 00:04:43.183 "config": [] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "ublk", 00:04:43.183 "config": [] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "nbd", 00:04:43.183 "config": [] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "nvmf", 00:04:43.183 "config": [ 00:04:43.183 { 00:04:43.183 "method": "nvmf_set_config", 00:04:43.183 "params": { 00:04:43.183 "discovery_filter": "match_any", 00:04:43.183 "admin_cmd_passthru": { 00:04:43.183 "identify_ctrlr": false 00:04:43.183 } 00:04:43.183 } 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "method": "nvmf_set_max_subsystems", 00:04:43.183 "params": { 00:04:43.183 "max_subsystems": 1024 00:04:43.183 } 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "method": "nvmf_set_crdt", 00:04:43.183 "params": { 00:04:43.183 "crdt1": 0, 00:04:43.183 "crdt2": 0, 00:04:43.183 "crdt3": 0 00:04:43.183 } 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "method": "nvmf_create_transport", 00:04:43.183 "params": { 00:04:43.183 "trtype": "TCP", 00:04:43.183 "max_queue_depth": 128, 00:04:43.183 "max_io_qpairs_per_ctrlr": 127, 00:04:43.183 "in_capsule_data_size": 4096, 00:04:43.183 "max_io_size": 131072, 00:04:43.183 "io_unit_size": 131072, 00:04:43.183 "max_aq_depth": 128, 00:04:43.183 "num_shared_buffers": 511, 00:04:43.183 "buf_cache_size": 4294967295, 00:04:43.183 "dif_insert_or_strip": false, 00:04:43.183 "zcopy": false, 00:04:43.183 "c2h_success": true, 00:04:43.183 "sock_priority": 0, 00:04:43.183 "abort_timeout_sec": 1, 00:04:43.183 "ack_timeout": 0, 00:04:43.183 "data_wr_pool_size": 0 00:04:43.183 } 00:04:43.183 } 00:04:43.183 ] 00:04:43.183 }, 00:04:43.183 { 00:04:43.183 "subsystem": "iscsi", 00:04:43.183 "config": [ 
00:04:43.183 { 00:04:43.183 "method": "iscsi_set_options", 00:04:43.183 "params": { 00:04:43.183 "node_base": "iqn.2016-06.io.spdk", 00:04:43.183 "max_sessions": 128, 00:04:43.183 "max_connections_per_session": 2, 00:04:43.183 "max_queue_depth": 64, 00:04:43.183 "default_time2wait": 2, 00:04:43.183 "default_time2retain": 20, 00:04:43.183 "first_burst_length": 8192, 00:04:43.183 "immediate_data": true, 00:04:43.183 "allow_duplicated_isid": false, 00:04:43.183 "error_recovery_level": 0, 00:04:43.183 "nop_timeout": 60, 00:04:43.183 "nop_in_interval": 30, 00:04:43.183 "disable_chap": false, 00:04:43.183 "require_chap": false, 00:04:43.183 "mutual_chap": false, 00:04:43.183 "chap_group": 0, 00:04:43.183 "max_large_datain_per_connection": 64, 00:04:43.183 "max_r2t_per_connection": 4, 00:04:43.183 "pdu_pool_size": 36864, 00:04:43.183 "immediate_data_pool_size": 16384, 00:04:43.183 "data_out_pool_size": 2048 00:04:43.183 } 00:04:43.183 } 00:04:43.183 ] 00:04:43.183 } 00:04:43.183 ] 00:04:43.183 } 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58968 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58968 ']' 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58968 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58968 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.183 killing process with pid 58968 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58968' 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58968 00:04:43.183 22:34:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58968 00:04:43.750 22:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58997 00:04:43.750 22:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.750 22:34:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58997 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58997 ']' 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58997 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58997 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.022 killing process with pid 58997 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58997' 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58997 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58997 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.022 00:04:49.022 real 0m7.177s 00:04:49.022 user 0m6.861s 00:04:49.022 sys 0m0.738s 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.022 ************************************ 00:04:49.022 END TEST skip_rpc_with_json 00:04:49.022 ************************************ 00:04:49.022 22:35:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.022 22:35:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.022 22:35:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.022 22:35:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.022 22:35:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.022 ************************************ 00:04:49.022 START TEST skip_rpc_with_delay 00:04:49.022 ************************************ 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.022 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.282 [2024-07-15 22:35:04.650105] app.c: 
832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:49.282 [2024-07-15 22:35:04.650232] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:49.282 00:04:49.282 real 0m0.086s 00:04:49.282 user 0m0.057s 00:04:49.282 sys 0m0.027s 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.282 22:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.282 ************************************ 00:04:49.282 END TEST skip_rpc_with_delay 00:04:49.282 ************************************ 00:04:49.282 22:35:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.282 22:35:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.282 22:35:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.282 22:35:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.282 22:35:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.282 22:35:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.282 22:35:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.282 ************************************ 00:04:49.282 START TEST exit_on_failed_rpc_init 00:04:49.282 ************************************ 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59105 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59105 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59105 ']' 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.282 22:35:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.282 [2024-07-15 22:35:04.771017] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
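The app.c *ERROR* lines in the skip_rpc_with_delay run above are its expected outcome: combining --no-rpc-server with --wait-for-rpc must make spdk_tgt refuse to start, and the NOT wrapper turns that failure into a test pass. Below is a minimal stand-in for that wrapper, not the real helper from autotest_common.sh, which additionally normalizes signal exit codes (the 'es > 128' arithmetic visible in the trace).

# NOT: succeed only when the wrapped command fails (simplified sketch).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    else
        return 0    # command failed, which is what the caller expects
    fi
}

# Expected to fail: --wait-for-rpc is meaningless when the RPC server is disabled.
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc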
00:04:49.282 [2024-07-15 22:35:04.771126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59105 ] 00:04:49.542 [2024-07-15 22:35:04.905664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.542 [2024-07-15 22:35:05.010085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.542 [2024-07-15 22:35:05.064505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.479 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:50.480 22:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.480 [2024-07-15 22:35:05.857388] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:50.480 [2024-07-15 22:35:05.857540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:04:50.480 [2024-07-15 22:35:06.000914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.739 [2024-07-15 22:35:06.103906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.739 [2024-07-15 22:35:06.104017] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
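The rpc.c error above is exactly what exit_on_failed_rpc_init waits for: the second spdk_tgt (pid 59123, core mask 0x2) cannot listen on /var/tmp/spdk.sock because pid 59105 already owns it. Outside of this negative test, two targets coexist by giving each one its own RPC socket with -r, the same flag the json_config tests further down use; the second socket path below is illustrative only.

# First instance keeps the default /var/tmp/spdk.sock; the second gets a private socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &

# RPC calls are then aimed at a specific instance with -s.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_second.sock rpc_get_methods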
00:04:50.739 [2024-07-15 22:35:06.104032] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.739 [2024-07-15 22:35:06.104041] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59105 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59105 ']' 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59105 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59105 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.739 killing process with pid 59105 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59105' 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59105 00:04:50.739 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59105 00:04:51.307 00:04:51.307 real 0m1.915s 00:04:51.307 user 0m2.235s 00:04:51.307 sys 0m0.438s 00:04:51.307 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.307 22:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.307 ************************************ 00:04:51.307 END TEST exit_on_failed_rpc_init 00:04:51.307 ************************************ 00:04:51.307 22:35:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.307 22:35:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.307 00:04:51.307 real 0m14.940s 00:04:51.307 user 0m14.340s 00:04:51.307 sys 0m1.667s 00:04:51.307 22:35:06 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.307 22:35:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.307 ************************************ 00:04:51.307 END TEST skip_rpc 00:04:51.307 ************************************ 00:04:51.307 22:35:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.307 22:35:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.307 22:35:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.307 
22:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.307 22:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.307 ************************************ 00:04:51.307 START TEST rpc_client 00:04:51.307 ************************************ 00:04:51.307 22:35:06 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.307 * Looking for test storage... 00:04:51.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:51.307 22:35:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:51.307 OK 00:04:51.307 22:35:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.307 00:04:51.307 real 0m0.101s 00:04:51.307 user 0m0.048s 00:04:51.307 sys 0m0.059s 00:04:51.307 22:35:06 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.307 22:35:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.307 ************************************ 00:04:51.307 END TEST rpc_client 00:04:51.307 ************************************ 00:04:51.307 22:35:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.307 22:35:06 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.307 22:35:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.307 22:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.307 22:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.567 ************************************ 00:04:51.567 START TEST json_config 00:04:51.567 ************************************ 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.567 22:35:06 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.567 22:35:06 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.567 22:35:06 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.567 22:35:06 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.567 22:35:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.567 22:35:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.567 22:35:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.567 22:35:06 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.567 22:35:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@47 -- # : 0 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.567 22:35:06 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.567 INFO: JSON configuration test init 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.567 22:35:06 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.567 22:35:06 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.567 22:35:06 json_config -- json_config/common.sh@10 -- # shift 00:04:51.567 22:35:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.567 22:35:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.567 22:35:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.567 22:35:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.567 22:35:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.567 22:35:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59241 00:04:51.567 Waiting for target to run... 00:04:51.567 22:35:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
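The app_pid/app_socket/app_params declarations above are the whole bookkeeping model of json_config/common.sh: one associative-array slot per managed app. A small standalone sketch of that pattern, with the values copied from the trace and the PID handling purely illustrative:

#!/usr/bin/env bash
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')

app=target
echo "would launch spdk_tgt ${app_params[$app]} -r ${app_socket[$app]}"
app_pid["$app"]=12345   # the real script records the PID of the spdk_tgt it just started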
00:04:51.567 22:35:06 json_config -- json_config/common.sh@25 -- # waitforlisten 59241 /var/tmp/spdk_tgt.sock 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@829 -- # '[' -z 59241 ']' 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.567 22:35:06 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.567 22:35:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.567 [2024-07-15 22:35:07.037783] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:51.568 [2024-07-15 22:35:07.037880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:04:52.135 [2024-07-15 22:35:07.473218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.135 [2024-07-15 22:35:07.554518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.704 22:35:08 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.704 22:35:08 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:52.704 00:04:52.704 22:35:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.704 22:35:08 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:52.704 22:35:08 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:52.704 22:35:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.704 22:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.704 22:35:08 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:52.704 22:35:08 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:52.704 22:35:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.704 22:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.704 22:35:08 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.704 22:35:08 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:52.704 22:35:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.963 [2024-07-15 22:35:08.335036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.222 22:35:08 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:53.222 22:35:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:53.222 22:35:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.222 22:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.222 22:35:08 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:53.222 22:35:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:53.222 22:35:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:53.222 22:35:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:53.222 22:35:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:53.222 22:35:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:53.481 22:35:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.481 22:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:53.481 22:35:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.481 22:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:53.481 22:35:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.481 22:35:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:53.740 MallocForNvmf0 00:04:53.740 22:35:09 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.740 22:35:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:53.999 MallocForNvmf1 00:04:53.999 22:35:09 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.999 22:35:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:54.258 [2024-07-15 22:35:09.589797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.258 22:35:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.258 22:35:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.258 22:35:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.258 22:35:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.517 22:35:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.517 22:35:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:54.776 22:35:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:54.776 22:35:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:55.035 [2024-07-15 22:35:10.450346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.035 22:35:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:55.035 22:35:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.035 22:35:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.035 22:35:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:55.035 22:35:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.035 22:35:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.035 22:35:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:55.035 22:35:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.035 22:35:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.294 MallocBdevForConfigChangeCheck 00:04:55.294 22:35:10 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:55.294 22:35:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.294 22:35:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.294 22:35:10 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:55.294 22:35:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.861 INFO: shutting down applications... 00:04:55.861 22:35:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
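The create_nvmf_subsystem_config steps above can be replayed by hand with the same RPC calls the test issues. The sketch below assumes a target already serving RPC on /var/tmp/spdk_tgt.sock and already past framework initialization (in the run above that happened when the configuration was loaded over RPC); every parameter is taken verbatim from the trace.

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

# Backing bdevs for the namespaces (same sizes and names as the run above).
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420.
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420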
00:04:55.861 22:35:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:55.861 22:35:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:55.861 22:35:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:55.861 22:35:11 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.120 Calling clear_iscsi_subsystem 00:04:56.120 Calling clear_nvmf_subsystem 00:04:56.120 Calling clear_nbd_subsystem 00:04:56.120 Calling clear_ublk_subsystem 00:04:56.120 Calling clear_vhost_blk_subsystem 00:04:56.120 Calling clear_vhost_scsi_subsystem 00:04:56.120 Calling clear_bdev_subsystem 00:04:56.120 22:35:11 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:56.120 22:35:11 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:56.120 22:35:11 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:56.120 22:35:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.120 22:35:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.120 22:35:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.380 22:35:11 json_config -- json_config/json_config.sh@345 -- # break 00:04:56.380 22:35:11 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:56.380 22:35:11 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:56.380 22:35:11 json_config -- json_config/common.sh@31 -- # local app=target 00:04:56.380 22:35:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.380 22:35:11 json_config -- json_config/common.sh@35 -- # [[ -n 59241 ]] 00:04:56.380 22:35:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59241 00:04:56.380 22:35:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.380 22:35:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.380 22:35:11 json_config -- json_config/common.sh@41 -- # kill -0 59241 00:04:56.380 22:35:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.948 22:35:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.948 22:35:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.948 22:35:12 json_config -- json_config/common.sh@41 -- # kill -0 59241 00:04:56.948 22:35:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.948 22:35:12 json_config -- json_config/common.sh@43 -- # break 00:04:56.948 SPDK target shutdown done 00:04:56.948 22:35:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.948 22:35:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.948 INFO: relaunching applications... 00:04:56.948 22:35:12 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
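Clearing the target before shutdown, as json_config_clear does above, is a clear_config.py call followed by a check that save_config really came back empty. A condensed sketch of that check, assuming the same socket and assuming config_filter.py filters the configuration from stdin to stdout, as the helpers above use it:

SOCK=/var/tmp/spdk_tgt.sock
SPDK=/home/vagrant/spdk_repo/spdk

# Ask every subsystem to drop its runtime configuration.
$SPDK/test/json_config/clear_config.py -s $SOCK clear_config

# Verify nothing is left: strip global parameters, then fail if any subsystem still has config.
$SPDK/scripts/rpc.py -s $SOCK save_config \
    | $SPDK/test/json_config/config_filter.py -method delete_global_parameters \
    | $SPDK/test/json_config/config_filter.py -method check_empty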
00:04:56.948 22:35:12 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.948 22:35:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.948 22:35:12 json_config -- json_config/common.sh@10 -- # shift 00:04:56.948 22:35:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.948 22:35:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.948 22:35:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.948 22:35:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.948 22:35:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.948 22:35:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59432 00:04:56.948 22:35:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:56.948 Waiting for target to run... 00:04:56.948 22:35:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.948 22:35:12 json_config -- json_config/common.sh@25 -- # waitforlisten 59432 /var/tmp/spdk_tgt.sock 00:04:56.948 22:35:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 59432 ']' 00:04:56.948 22:35:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.948 22:35:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.948 22:35:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.948 22:35:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.948 22:35:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.948 [2024-07-15 22:35:12.497864] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:56.948 [2024-07-15 22:35:12.497970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:04:57.516 [2024-07-15 22:35:12.939148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.516 [2024-07-15 22:35:13.039585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.775 [2024-07-15 22:35:13.166643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.033 [2024-07-15 22:35:13.376891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.033 [2024-07-15 22:35:13.408983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.033 22:35:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.033 00:04:58.033 22:35:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:58.033 22:35:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.033 22:35:13 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:58.033 INFO: Checking if target configuration is the same... 
00:04:58.033 22:35:13 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:58.033 22:35:13 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.033 22:35:13 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:58.033 22:35:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.033 + '[' 2 -ne 2 ']' 00:04:58.034 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:58.034 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:58.034 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:58.034 +++ basename /dev/fd/62 00:04:58.034 ++ mktemp /tmp/62.XXX 00:04:58.034 + tmp_file_1=/tmp/62.3Oz 00:04:58.034 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.034 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.034 + tmp_file_2=/tmp/spdk_tgt_config.json.3yA 00:04:58.034 + ret=0 00:04:58.034 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:58.292 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:58.551 + diff -u /tmp/62.3Oz /tmp/spdk_tgt_config.json.3yA 00:04:58.551 INFO: JSON config files are the same 00:04:58.551 + echo 'INFO: JSON config files are the same' 00:04:58.551 + rm /tmp/62.3Oz /tmp/spdk_tgt_config.json.3yA 00:04:58.551 + exit 0 00:04:58.551 INFO: changing configuration and checking if this can be detected... 00:04:58.551 22:35:13 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:58.551 22:35:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:58.551 22:35:13 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.551 22:35:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.810 22:35:14 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.810 22:35:14 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:58.810 22:35:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.810 + '[' 2 -ne 2 ']' 00:04:58.810 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:58.810 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:58.810 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:58.810 +++ basename /dev/fd/62 00:04:58.810 ++ mktemp /tmp/62.XXX 00:04:58.810 + tmp_file_1=/tmp/62.qXQ 00:04:58.810 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.810 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.810 + tmp_file_2=/tmp/spdk_tgt_config.json.FqW 00:04:58.810 + ret=0 00:04:58.810 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:59.100 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:59.100 + diff -u /tmp/62.qXQ /tmp/spdk_tgt_config.json.FqW 00:04:59.100 + ret=1 00:04:59.100 + echo '=== Start of file: /tmp/62.qXQ ===' 00:04:59.100 + cat /tmp/62.qXQ 00:04:59.100 + echo '=== End of file: /tmp/62.qXQ ===' 00:04:59.100 + echo '' 00:04:59.100 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FqW ===' 00:04:59.100 + cat /tmp/spdk_tgt_config.json.FqW 00:04:59.100 + echo '=== End of file: /tmp/spdk_tgt_config.json.FqW ===' 00:04:59.100 + echo '' 00:04:59.100 + rm /tmp/62.qXQ /tmp/spdk_tgt_config.json.FqW 00:04:59.100 + exit 1 00:04:59.100 INFO: configuration change detected. 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:59.100 22:35:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.100 22:35:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@317 -- # [[ -n 59432 ]] 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:59.100 22:35:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.100 22:35:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:59.100 22:35:14 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:59.100 22:35:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.100 22:35:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.384 22:35:14 json_config -- json_config/json_config.sh@323 -- # killprocess 59432 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@948 -- # '[' -z 59432 ']' 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@952 -- # kill -0 59432 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@953 -- # uname 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59432 00:04:59.384 
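The change detection above comes down to normalizing two save_config dumps with config_filter.py -method sort and diffing them; the first comparison matched, this one differs because MallocBdevForConfigChangeCheck was deleted in between. A standalone sketch of the same comparison, assuming config_filter.py reads stdin and with temp-file names chosen here only for illustration:

SPDK=/home/vagrant/spdk_repo/spdk
FILTER=$SPDK/test/json_config/config_filter.py

# Normalize both sides so key ordering cannot produce spurious differences.
$FILTER -method sort < $SPDK/spdk_tgt_config.json > /tmp/expected.json
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/actual.json

# Identical output means the live target still matches the saved configuration.
diff -u /tmp/expected.json /tmp/actual.json && echo 'INFO: JSON config files are the same'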
killing process with pid 59432 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59432' 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@967 -- # kill 59432 00:04:59.384 22:35:14 json_config -- common/autotest_common.sh@972 -- # wait 59432 00:04:59.643 22:35:14 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:59.643 22:35:14 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:59.643 22:35:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.643 22:35:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.643 22:35:15 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:59.643 INFO: Success 00:04:59.643 22:35:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:59.643 ************************************ 00:04:59.643 END TEST json_config 00:04:59.643 ************************************ 00:04:59.643 00:04:59.643 real 0m8.156s 00:04:59.643 user 0m11.536s 00:04:59.643 sys 0m1.722s 00:04:59.643 22:35:15 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.643 22:35:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.643 22:35:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.643 22:35:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.643 22:35:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.643 22:35:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.643 22:35:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.643 ************************************ 00:04:59.643 START TEST json_config_extra_key 00:04:59.643 ************************************ 00:04:59.643 22:35:15 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.643 22:35:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.643 22:35:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.643 22:35:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.643 22:35:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.643 22:35:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.643 22:35:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.643 22:35:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.643 22:35:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.643 22:35:15 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.643 22:35:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:59.643 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.644 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.644 INFO: launching applications... 00:04:59.644 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.644 22:35:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59572 00:04:59.644 Waiting for target to run... 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59572 /var/tmp/spdk_tgt.sock 00:04:59.644 22:35:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.644 22:35:15 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59572 ']' 00:04:59.644 22:35:15 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.644 22:35:15 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.644 22:35:15 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.644 22:35:15 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.644 22:35:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.902 [2024-07-15 22:35:15.224086] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:04:59.902 [2024-07-15 22:35:15.224171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59572 ] 00:05:00.161 [2024-07-15 22:35:15.672810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.418 [2024-07-15 22:35:15.774488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.418 [2024-07-15 22:35:15.796100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.677 22:35:16 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.677 00:05:00.677 22:35:16 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.677 INFO: shutting down applications... 00:05:00.677 22:35:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
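At this point spdk_tgt has been launched with the extra_key.json config and the harness waits for it to answer on the UNIX-domain RPC socket before continuing. A rough sketch of that launch-and-wait pattern, with the binary path, flags, and socket taken from the trace; the retry loop is illustrative only, not the exact waitforlisten implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # Poll until the target answers a trivial RPC on its socket (give up after ~15 s)
    for ((i = 0; i < 30; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done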
00:05:00.677 22:35:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59572 ]] 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59572 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59572 00:05:00.677 22:35:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.245 22:35:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.245 22:35:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.245 22:35:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59572 00:05:01.245 22:35:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59572 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.811 SPDK target shutdown done 00:05:01.811 22:35:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.811 Success 00:05:01.811 22:35:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:01.811 00:05:01.811 real 0m2.125s 00:05:01.811 user 0m1.550s 00:05:01.811 sys 0m0.472s 00:05:01.811 22:35:17 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.811 ************************************ 00:05:01.811 END TEST json_config_extra_key 00:05:01.811 ************************************ 00:05:01.811 22:35:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.811 22:35:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.811 22:35:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.811 22:35:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.811 22:35:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.811 22:35:17 -- common/autotest_common.sh@10 -- # set +x 00:05:01.811 ************************************ 00:05:01.811 START TEST alias_rpc 00:05:01.811 ************************************ 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.811 * Looking for test storage... 
00:05:01.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:01.811 22:35:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.811 22:35:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59643 00:05:01.811 22:35:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.811 22:35:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59643 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59643 ']' 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.811 22:35:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.069 [2024-07-15 22:35:17.413502] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:02.069 [2024-07-15 22:35:17.413643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:05:02.070 [2024-07-15 22:35:17.551382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.328 [2024-07-15 22:35:17.677188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.328 [2024-07-15 22:35:17.740245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.894 22:35:18 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.894 22:35:18 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:02.894 22:35:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:03.153 22:35:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59643 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59643 ']' 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59643 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59643 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.153 killing process with pid 59643 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59643' 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@967 -- # kill 59643 00:05:03.153 22:35:18 alias_rpc -- common/autotest_common.sh@972 -- # wait 59643 00:05:03.721 00:05:03.721 real 0m1.897s 00:05:03.721 user 0m2.118s 00:05:03.721 sys 0m0.473s 00:05:03.721 22:35:19 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.721 22:35:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.721 
************************************ 00:05:03.721 END TEST alias_rpc 00:05:03.721 ************************************ 00:05:03.721 22:35:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.721 22:35:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:03.721 22:35:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:03.721 22:35:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.721 22:35:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.721 22:35:19 -- common/autotest_common.sh@10 -- # set +x 00:05:03.721 ************************************ 00:05:03.721 START TEST spdkcli_tcp 00:05:03.721 ************************************ 00:05:03.721 22:35:19 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:03.721 * Looking for test storage... 00:05:03.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:03.721 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:03.721 22:35:19 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.721 22:35:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.979 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59719 00:05:03.979 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:03.979 22:35:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59719 00:05:03.979 22:35:19 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59719 ']' 00:05:03.979 22:35:19 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.979 22:35:19 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.979 22:35:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.979 22:35:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.979 22:35:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.979 [2024-07-15 22:35:19.351036] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
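The spdkcli_tcp test starting here exercises the same JSON-RPC interface over TCP rather than over the UNIX socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is then pointed at that TCP endpoint. A condensed sketch of the bridge, using the addresses and rpc.py options that appear in the trace below:

    # Forward TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Issue an RPC over the TCP bridge, e.g. list the available methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"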
00:05:03.979 [2024-07-15 22:35:19.351148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:05:03.979 [2024-07-15 22:35:19.490839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.237 [2024-07-15 22:35:19.615722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.237 [2024-07-15 22:35:19.615732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.237 [2024-07-15 22:35:19.668851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:04.801 22:35:20 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.801 22:35:20 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:04.801 22:35:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59736 00:05:04.801 22:35:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:04.801 22:35:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.367 [ 00:05:05.367 "bdev_malloc_delete", 00:05:05.367 "bdev_malloc_create", 00:05:05.367 "bdev_null_resize", 00:05:05.367 "bdev_null_delete", 00:05:05.367 "bdev_null_create", 00:05:05.367 "bdev_nvme_cuse_unregister", 00:05:05.367 "bdev_nvme_cuse_register", 00:05:05.367 "bdev_opal_new_user", 00:05:05.367 "bdev_opal_set_lock_state", 00:05:05.367 "bdev_opal_delete", 00:05:05.367 "bdev_opal_get_info", 00:05:05.367 "bdev_opal_create", 00:05:05.367 "bdev_nvme_opal_revert", 00:05:05.367 "bdev_nvme_opal_init", 00:05:05.367 "bdev_nvme_send_cmd", 00:05:05.367 "bdev_nvme_get_path_iostat", 00:05:05.367 "bdev_nvme_get_mdns_discovery_info", 00:05:05.367 "bdev_nvme_stop_mdns_discovery", 00:05:05.367 "bdev_nvme_start_mdns_discovery", 00:05:05.367 "bdev_nvme_set_multipath_policy", 00:05:05.367 "bdev_nvme_set_preferred_path", 00:05:05.367 "bdev_nvme_get_io_paths", 00:05:05.367 "bdev_nvme_remove_error_injection", 00:05:05.367 "bdev_nvme_add_error_injection", 00:05:05.367 "bdev_nvme_get_discovery_info", 00:05:05.367 "bdev_nvme_stop_discovery", 00:05:05.367 "bdev_nvme_start_discovery", 00:05:05.367 "bdev_nvme_get_controller_health_info", 00:05:05.367 "bdev_nvme_disable_controller", 00:05:05.367 "bdev_nvme_enable_controller", 00:05:05.367 "bdev_nvme_reset_controller", 00:05:05.367 "bdev_nvme_get_transport_statistics", 00:05:05.367 "bdev_nvme_apply_firmware", 00:05:05.367 "bdev_nvme_detach_controller", 00:05:05.367 "bdev_nvme_get_controllers", 00:05:05.367 "bdev_nvme_attach_controller", 00:05:05.367 "bdev_nvme_set_hotplug", 00:05:05.367 "bdev_nvme_set_options", 00:05:05.367 "bdev_passthru_delete", 00:05:05.367 "bdev_passthru_create", 00:05:05.367 "bdev_lvol_set_parent_bdev", 00:05:05.367 "bdev_lvol_set_parent", 00:05:05.367 "bdev_lvol_check_shallow_copy", 00:05:05.367 "bdev_lvol_start_shallow_copy", 00:05:05.367 "bdev_lvol_grow_lvstore", 00:05:05.367 "bdev_lvol_get_lvols", 00:05:05.367 "bdev_lvol_get_lvstores", 00:05:05.367 "bdev_lvol_delete", 00:05:05.367 "bdev_lvol_set_read_only", 00:05:05.367 "bdev_lvol_resize", 00:05:05.367 "bdev_lvol_decouple_parent", 00:05:05.367 "bdev_lvol_inflate", 00:05:05.367 "bdev_lvol_rename", 00:05:05.367 "bdev_lvol_clone_bdev", 00:05:05.367 "bdev_lvol_clone", 00:05:05.367 "bdev_lvol_snapshot", 00:05:05.367 "bdev_lvol_create", 00:05:05.367 
"bdev_lvol_delete_lvstore", 00:05:05.367 "bdev_lvol_rename_lvstore", 00:05:05.367 "bdev_lvol_create_lvstore", 00:05:05.367 "bdev_raid_set_options", 00:05:05.367 "bdev_raid_remove_base_bdev", 00:05:05.367 "bdev_raid_add_base_bdev", 00:05:05.367 "bdev_raid_delete", 00:05:05.367 "bdev_raid_create", 00:05:05.367 "bdev_raid_get_bdevs", 00:05:05.367 "bdev_error_inject_error", 00:05:05.367 "bdev_error_delete", 00:05:05.367 "bdev_error_create", 00:05:05.367 "bdev_split_delete", 00:05:05.367 "bdev_split_create", 00:05:05.367 "bdev_delay_delete", 00:05:05.367 "bdev_delay_create", 00:05:05.367 "bdev_delay_update_latency", 00:05:05.367 "bdev_zone_block_delete", 00:05:05.367 "bdev_zone_block_create", 00:05:05.367 "blobfs_create", 00:05:05.367 "blobfs_detect", 00:05:05.367 "blobfs_set_cache_size", 00:05:05.367 "bdev_aio_delete", 00:05:05.367 "bdev_aio_rescan", 00:05:05.367 "bdev_aio_create", 00:05:05.367 "bdev_ftl_set_property", 00:05:05.367 "bdev_ftl_get_properties", 00:05:05.367 "bdev_ftl_get_stats", 00:05:05.367 "bdev_ftl_unmap", 00:05:05.367 "bdev_ftl_unload", 00:05:05.367 "bdev_ftl_delete", 00:05:05.367 "bdev_ftl_load", 00:05:05.367 "bdev_ftl_create", 00:05:05.367 "bdev_virtio_attach_controller", 00:05:05.367 "bdev_virtio_scsi_get_devices", 00:05:05.367 "bdev_virtio_detach_controller", 00:05:05.367 "bdev_virtio_blk_set_hotplug", 00:05:05.367 "bdev_iscsi_delete", 00:05:05.367 "bdev_iscsi_create", 00:05:05.367 "bdev_iscsi_set_options", 00:05:05.367 "bdev_uring_delete", 00:05:05.367 "bdev_uring_rescan", 00:05:05.367 "bdev_uring_create", 00:05:05.367 "accel_error_inject_error", 00:05:05.367 "ioat_scan_accel_module", 00:05:05.367 "dsa_scan_accel_module", 00:05:05.367 "iaa_scan_accel_module", 00:05:05.367 "keyring_file_remove_key", 00:05:05.367 "keyring_file_add_key", 00:05:05.367 "keyring_linux_set_options", 00:05:05.367 "iscsi_get_histogram", 00:05:05.367 "iscsi_enable_histogram", 00:05:05.367 "iscsi_set_options", 00:05:05.367 "iscsi_get_auth_groups", 00:05:05.367 "iscsi_auth_group_remove_secret", 00:05:05.367 "iscsi_auth_group_add_secret", 00:05:05.367 "iscsi_delete_auth_group", 00:05:05.367 "iscsi_create_auth_group", 00:05:05.367 "iscsi_set_discovery_auth", 00:05:05.367 "iscsi_get_options", 00:05:05.367 "iscsi_target_node_request_logout", 00:05:05.367 "iscsi_target_node_set_redirect", 00:05:05.367 "iscsi_target_node_set_auth", 00:05:05.367 "iscsi_target_node_add_lun", 00:05:05.367 "iscsi_get_stats", 00:05:05.367 "iscsi_get_connections", 00:05:05.367 "iscsi_portal_group_set_auth", 00:05:05.367 "iscsi_start_portal_group", 00:05:05.367 "iscsi_delete_portal_group", 00:05:05.367 "iscsi_create_portal_group", 00:05:05.367 "iscsi_get_portal_groups", 00:05:05.367 "iscsi_delete_target_node", 00:05:05.367 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.367 "iscsi_target_node_add_pg_ig_maps", 00:05:05.367 "iscsi_create_target_node", 00:05:05.367 "iscsi_get_target_nodes", 00:05:05.367 "iscsi_delete_initiator_group", 00:05:05.367 "iscsi_initiator_group_remove_initiators", 00:05:05.367 "iscsi_initiator_group_add_initiators", 00:05:05.367 "iscsi_create_initiator_group", 00:05:05.367 "iscsi_get_initiator_groups", 00:05:05.367 "nvmf_set_crdt", 00:05:05.367 "nvmf_set_config", 00:05:05.367 "nvmf_set_max_subsystems", 00:05:05.367 "nvmf_stop_mdns_prr", 00:05:05.367 "nvmf_publish_mdns_prr", 00:05:05.367 "nvmf_subsystem_get_listeners", 00:05:05.367 "nvmf_subsystem_get_qpairs", 00:05:05.367 "nvmf_subsystem_get_controllers", 00:05:05.367 "nvmf_get_stats", 00:05:05.367 "nvmf_get_transports", 00:05:05.367 
"nvmf_create_transport", 00:05:05.367 "nvmf_get_targets", 00:05:05.367 "nvmf_delete_target", 00:05:05.367 "nvmf_create_target", 00:05:05.367 "nvmf_subsystem_allow_any_host", 00:05:05.367 "nvmf_subsystem_remove_host", 00:05:05.367 "nvmf_subsystem_add_host", 00:05:05.367 "nvmf_ns_remove_host", 00:05:05.367 "nvmf_ns_add_host", 00:05:05.367 "nvmf_subsystem_remove_ns", 00:05:05.368 "nvmf_subsystem_add_ns", 00:05:05.368 "nvmf_subsystem_listener_set_ana_state", 00:05:05.368 "nvmf_discovery_get_referrals", 00:05:05.368 "nvmf_discovery_remove_referral", 00:05:05.368 "nvmf_discovery_add_referral", 00:05:05.368 "nvmf_subsystem_remove_listener", 00:05:05.368 "nvmf_subsystem_add_listener", 00:05:05.368 "nvmf_delete_subsystem", 00:05:05.368 "nvmf_create_subsystem", 00:05:05.368 "nvmf_get_subsystems", 00:05:05.368 "env_dpdk_get_mem_stats", 00:05:05.368 "nbd_get_disks", 00:05:05.368 "nbd_stop_disk", 00:05:05.368 "nbd_start_disk", 00:05:05.368 "ublk_recover_disk", 00:05:05.368 "ublk_get_disks", 00:05:05.368 "ublk_stop_disk", 00:05:05.368 "ublk_start_disk", 00:05:05.368 "ublk_destroy_target", 00:05:05.368 "ublk_create_target", 00:05:05.368 "virtio_blk_create_transport", 00:05:05.368 "virtio_blk_get_transports", 00:05:05.368 "vhost_controller_set_coalescing", 00:05:05.368 "vhost_get_controllers", 00:05:05.368 "vhost_delete_controller", 00:05:05.368 "vhost_create_blk_controller", 00:05:05.368 "vhost_scsi_controller_remove_target", 00:05:05.368 "vhost_scsi_controller_add_target", 00:05:05.368 "vhost_start_scsi_controller", 00:05:05.368 "vhost_create_scsi_controller", 00:05:05.368 "thread_set_cpumask", 00:05:05.368 "framework_get_governor", 00:05:05.368 "framework_get_scheduler", 00:05:05.368 "framework_set_scheduler", 00:05:05.368 "framework_get_reactors", 00:05:05.368 "thread_get_io_channels", 00:05:05.368 "thread_get_pollers", 00:05:05.368 "thread_get_stats", 00:05:05.368 "framework_monitor_context_switch", 00:05:05.368 "spdk_kill_instance", 00:05:05.368 "log_enable_timestamps", 00:05:05.368 "log_get_flags", 00:05:05.368 "log_clear_flag", 00:05:05.368 "log_set_flag", 00:05:05.368 "log_get_level", 00:05:05.368 "log_set_level", 00:05:05.368 "log_get_print_level", 00:05:05.368 "log_set_print_level", 00:05:05.368 "framework_enable_cpumask_locks", 00:05:05.368 "framework_disable_cpumask_locks", 00:05:05.368 "framework_wait_init", 00:05:05.368 "framework_start_init", 00:05:05.368 "scsi_get_devices", 00:05:05.368 "bdev_get_histogram", 00:05:05.368 "bdev_enable_histogram", 00:05:05.368 "bdev_set_qos_limit", 00:05:05.368 "bdev_set_qd_sampling_period", 00:05:05.368 "bdev_get_bdevs", 00:05:05.368 "bdev_reset_iostat", 00:05:05.368 "bdev_get_iostat", 00:05:05.368 "bdev_examine", 00:05:05.368 "bdev_wait_for_examine", 00:05:05.368 "bdev_set_options", 00:05:05.368 "notify_get_notifications", 00:05:05.368 "notify_get_types", 00:05:05.368 "accel_get_stats", 00:05:05.368 "accel_set_options", 00:05:05.368 "accel_set_driver", 00:05:05.368 "accel_crypto_key_destroy", 00:05:05.368 "accel_crypto_keys_get", 00:05:05.368 "accel_crypto_key_create", 00:05:05.368 "accel_assign_opc", 00:05:05.368 "accel_get_module_info", 00:05:05.368 "accel_get_opc_assignments", 00:05:05.368 "vmd_rescan", 00:05:05.368 "vmd_remove_device", 00:05:05.368 "vmd_enable", 00:05:05.368 "sock_get_default_impl", 00:05:05.368 "sock_set_default_impl", 00:05:05.368 "sock_impl_set_options", 00:05:05.368 "sock_impl_get_options", 00:05:05.368 "iobuf_get_stats", 00:05:05.368 "iobuf_set_options", 00:05:05.368 "framework_get_pci_devices", 00:05:05.368 
"framework_get_config", 00:05:05.368 "framework_get_subsystems", 00:05:05.368 "trace_get_info", 00:05:05.368 "trace_get_tpoint_group_mask", 00:05:05.368 "trace_disable_tpoint_group", 00:05:05.368 "trace_enable_tpoint_group", 00:05:05.368 "trace_clear_tpoint_mask", 00:05:05.368 "trace_set_tpoint_mask", 00:05:05.368 "keyring_get_keys", 00:05:05.368 "spdk_get_version", 00:05:05.368 "rpc_get_methods" 00:05:05.368 ] 00:05:05.368 22:35:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.368 22:35:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.368 22:35:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59719 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59719 ']' 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59719 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59719 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.368 killing process with pid 59719 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59719' 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59719 00:05:05.368 22:35:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59719 00:05:05.685 00:05:05.685 real 0m1.928s 00:05:05.685 user 0m3.641s 00:05:05.685 sys 0m0.488s 00:05:05.685 22:35:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.685 ************************************ 00:05:05.685 END TEST spdkcli_tcp 00:05:05.685 ************************************ 00:05:05.685 22:35:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.685 22:35:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.685 22:35:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.685 22:35:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.685 22:35:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.685 22:35:21 -- common/autotest_common.sh@10 -- # set +x 00:05:05.685 ************************************ 00:05:05.685 START TEST dpdk_mem_utility 00:05:05.685 ************************************ 00:05:05.685 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.942 * Looking for test storage... 
00:05:05.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:05.942 22:35:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:05.942 22:35:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59810 00:05:05.942 22:35:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.942 22:35:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59810 00:05:05.942 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59810 ']' 00:05:05.942 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.942 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.942 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.942 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.942 22:35:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.942 [2024-07-15 22:35:21.324363] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:05.942 [2024-07-15 22:35:21.324460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:05:05.942 [2024-07-15 22:35:21.457660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.200 [2024-07-15 22:35:21.577080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.200 [2024-07-15 22:35:21.641172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.778 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.778 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:06.778 22:35:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:06.778 22:35:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:06.778 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.778 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.778 { 00:05:06.778 "filename": "/tmp/spdk_mem_dump.txt" 00:05:06.778 } 00:05:06.778 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.778 22:35:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.036 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:07.036 1 heaps totaling size 814.000000 MiB 00:05:07.036 size: 814.000000 MiB heap id: 0 00:05:07.036 end heaps---------- 00:05:07.036 8 mempools totaling size 598.116089 MiB 00:05:07.036 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.036 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.036 size: 84.521057 MiB name: bdev_io_59810 00:05:07.036 size: 51.011292 MiB name: evtpool_59810 00:05:07.036 size: 50.003479 MiB name: msgpool_59810 
00:05:07.036 size: 21.763794 MiB name: PDU_Pool 00:05:07.036 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.036 size: 0.026123 MiB name: Session_Pool 00:05:07.036 end mempools------- 00:05:07.036 6 memzones totaling size 4.142822 MiB 00:05:07.036 size: 1.000366 MiB name: RG_ring_0_59810 00:05:07.036 size: 1.000366 MiB name: RG_ring_1_59810 00:05:07.036 size: 1.000366 MiB name: RG_ring_4_59810 00:05:07.036 size: 1.000366 MiB name: RG_ring_5_59810 00:05:07.036 size: 0.125366 MiB name: RG_ring_2_59810 00:05:07.036 size: 0.015991 MiB name: RG_ring_3_59810 00:05:07.036 end memzones------- 00:05:07.036 22:35:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.036 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:05:07.036 list of free elements. size: 12.471558 MiB 00:05:07.036 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:07.036 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:07.036 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:07.036 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:07.036 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:07.036 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:07.036 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:07.036 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:07.036 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:07.036 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:05:07.036 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:07.036 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:07.036 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:07.036 element at address: 0x200027e00000 with size: 0.396301 MiB 00:05:07.036 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:07.036 list of standard malloc elements. 
size: 199.265869 MiB 00:05:07.036 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:07.036 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:07.036 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:07.036 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:07.036 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:07.036 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.036 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:07.036 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.036 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:07.036 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:07.036 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:07.036 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:07.036 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:07.036 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:07.036 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:07.036 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:07.037 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:07.037 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91cc0 
with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:07.037 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94180 with size: 0.000183 MiB 
00:05:07.038 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:07.038 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e65740 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e65800 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c400 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:07.038 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:07.038 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:07.038 list of memzone associated elements. size: 602.262573 MiB 00:05:07.038 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:07.038 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.038 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:07.038 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.038 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:07.038 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59810_0 00:05:07.038 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:07.038 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59810_0 00:05:07.038 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:07.038 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59810_0 00:05:07.038 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:07.038 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.038 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:07.038 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.038 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:07.038 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59810 00:05:07.038 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:07.038 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59810 00:05:07.038 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.038 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59810 00:05:07.038 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:07.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.038 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:07.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.038 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:07.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.038 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:07.038 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.038 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:07.038 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59810 00:05:07.038 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:07.038 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59810 00:05:07.038 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:07.038 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59810 00:05:07.038 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:07.038 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59810 00:05:07.039 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:07.039 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59810 00:05:07.039 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:05:07.039 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.039 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:07.039 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.039 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:07.039 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.039 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:07.039 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59810 00:05:07.039 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:07.039 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.039 element at address: 0x200027e658c0 with size: 0.023743 MiB 00:05:07.039 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.039 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:07.039 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59810 00:05:07.039 element at address: 0x200027e6ba00 with size: 0.002441 MiB 00:05:07.039 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.039 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:07.039 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59810 00:05:07.039 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:07.039 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59810 00:05:07.039 element at address: 0x200027e6c4c0 with size: 0.000305 MiB 00:05:07.039 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.039 22:35:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.039 22:35:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59810 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59810 ']' 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59810 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59810 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.039 killing process with pid 59810 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59810' 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59810 00:05:07.039 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59810 00:05:07.605 00:05:07.605 real 0m1.719s 00:05:07.605 user 0m1.858s 00:05:07.605 sys 0m0.453s 00:05:07.605 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.605 ************************************ 00:05:07.605 END TEST dpdk_mem_utility 00:05:07.605 ************************************ 00:05:07.605 22:35:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 22:35:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.605 22:35:22 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.605 22:35:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.605 22:35:22 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.605 22:35:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 START TEST event 00:05:07.605 ************************************ 00:05:07.605 22:35:22 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.605 * Looking for test storage... 00:05:07.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.605 22:35:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:07.605 22:35:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.605 22:35:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.605 22:35:23 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:07.605 22:35:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.605 22:35:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 START TEST event_perf 00:05:07.605 ************************************ 00:05:07.605 22:35:23 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.605 Running I/O for 1 seconds...[2024-07-15 22:35:23.068279] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:07.605 [2024-07-15 22:35:23.068889] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:05:07.863 [2024-07-15 22:35:23.205892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.863 [2024-07-15 22:35:23.302365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.863 [2024-07-15 22:35:23.302665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.863 [2024-07-15 22:35:23.302519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.863 Running I/O for 1 seconds...[2024-07-15 22:35:23.302644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.242 00:05:09.242 lcore 0: 202077 00:05:09.242 lcore 1: 202075 00:05:09.242 lcore 2: 202074 00:05:09.242 lcore 3: 202075 00:05:09.242 done. 
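Note: the per-lcore counters above are the output of the event_perf microbenchmark; it runs one reactor per core in the 0xF mask for one second and prints how many events each reactor processed before "done.". A minimal sketch of the invocation as used in this run (repo-relative path; -m is the core mask, -t the run time in seconds):

  ./test/event/event_perf/event_perf -m 0xF -t 1
  # expected output: "lcore N: <events processed>" for each of the four reactors, then "done."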
00:05:09.242 00:05:09.242 real 0m1.336s 00:05:09.242 user 0m4.136s 00:05:09.242 sys 0m0.076s 00:05:09.242 22:35:24 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.242 22:35:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.242 ************************************ 00:05:09.242 END TEST event_perf 00:05:09.242 ************************************ 00:05:09.242 22:35:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:09.242 22:35:24 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.242 22:35:24 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:09.242 22:35:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.242 22:35:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.242 ************************************ 00:05:09.242 START TEST event_reactor 00:05:09.242 ************************************ 00:05:09.242 22:35:24 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.242 [2024-07-15 22:35:24.459583] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:09.242 [2024-07-15 22:35:24.459677] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:05:09.242 [2024-07-15 22:35:24.597467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.242 [2024-07-15 22:35:24.693337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.618 test_start 00:05:10.618 oneshot 00:05:10.618 tick 100 00:05:10.618 tick 100 00:05:10.618 tick 250 00:05:10.618 tick 100 00:05:10.618 tick 100 00:05:10.618 tick 100 00:05:10.618 tick 250 00:05:10.618 tick 500 00:05:10.618 tick 100 00:05:10.618 tick 100 00:05:10.618 tick 250 00:05:10.618 tick 100 00:05:10.618 tick 100 00:05:10.618 test_end 00:05:10.618 00:05:10.618 real 0m1.337s 00:05:10.618 user 0m1.172s 00:05:10.618 sys 0m0.059s 00:05:10.618 ************************************ 00:05:10.618 END TEST event_reactor 00:05:10.618 ************************************ 00:05:10.618 22:35:25 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.618 22:35:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.618 22:35:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.618 22:35:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.618 22:35:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:10.618 22:35:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.618 22:35:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.618 ************************************ 00:05:10.618 START TEST event_reactor_perf 00:05:10.618 ************************************ 00:05:10.618 22:35:25 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.618 [2024-07-15 22:35:25.844499] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
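Note: event_reactor above runs a single reactor (-c 0x1 in its EAL arguments) for one second and emits the test_start/oneshot/tick/test_end trace as its scheduled work fires; event_reactor_perf, which is just starting below, uses the same single-core setup but only reports an aggregate throughput line. The two invocations as they appear in this log:

  ./test/event/reactor/reactor -t 1              # trace of oneshot and tick events
  ./test/event/reactor_perf/reactor_perf -t 1    # prints "Performance: <N> events per second"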
00:05:10.618 [2024-07-15 22:35:25.844780] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:05:10.618 [2024-07-15 22:35:25.982685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.618 [2024-07-15 22:35:26.105987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.992 test_start 00:05:11.992 test_end 00:05:11.992 Performance: 390823 events per second 00:05:11.992 ************************************ 00:05:11.992 END TEST event_reactor_perf 00:05:11.992 00:05:11.992 real 0m1.358s 00:05:11.992 user 0m1.198s 00:05:11.992 sys 0m0.054s 00:05:11.992 22:35:27 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.992 22:35:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.992 ************************************ 00:05:11.992 22:35:27 event -- common/autotest_common.sh@1142 -- # return 0 00:05:11.992 22:35:27 event -- event/event.sh@49 -- # uname -s 00:05:11.993 22:35:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.993 22:35:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.993 22:35:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.993 22:35:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.993 22:35:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.993 ************************************ 00:05:11.993 START TEST event_scheduler 00:05:11.993 ************************************ 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.993 * Looking for test storage... 00:05:11.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:11.993 22:35:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:11.993 22:35:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60017 00:05:11.993 22:35:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.993 22:35:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:11.993 22:35:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60017 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60017 ']' 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.993 22:35:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.993 [2024-07-15 22:35:27.373193] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
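Note: the scheduler app was started with --wait-for-rpc, so initialization pauses until the test selects a scheduler over the default /var/tmp/spdk.sock socket; the next log lines are exactly that handshake (rpc_cmd in scheduler.sh ultimately issues these calls through scripts/rpc.py). A sketch of the sequence:

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scripts/rpc.py framework_set_scheduler dynamic   # the framework_set_scheduler call logged below
  scripts/rpc.py framework_start_init              # lets initialization continue past --wait-for-rpc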
00:05:11.993 [2024-07-15 22:35:27.373301] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:05:11.993 [2024-07-15 22:35:27.511685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.251 [2024-07-15 22:35:27.621061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.251 [2024-07-15 22:35:27.621137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.251 [2024-07-15 22:35:27.621268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.251 [2024-07-15 22:35:27.621285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.818 22:35:28 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.818 22:35:28 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:12.818 22:35:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.818 22:35:28 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.818 22:35:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.818 POWER: Env isn't set yet! 00:05:12.818 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:12.819 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.819 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.819 POWER: Attempting to initialise PSTAT power management... 00:05:12.819 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.819 POWER: Cannot set governor of lcore 0 to performance 00:05:12.819 POWER: Attempting to initialise AMD PSTATE power management... 00:05:12.819 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.819 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.819 POWER: Attempting to initialise CPPC power management... 00:05:12.819 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.819 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.819 POWER: Attempting to initialise VM power management... 
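Note: the governor failures above and the guest-channel failure just below are non-fatal in this run: the VM exposes no cpufreq scaling_governor files and no virtio power agent, so the dynamic scheduler proceeds without frequency control, using the load/core/busy limits logged below. On a host with a cpufreq driver loaded, the governor could be inspected or set by hand (hypothetical check, not part of this test):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor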
00:05:12.819 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:12.819 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:12.819 POWER: Unable to set Power Management Environment for lcore 0 00:05:12.819 [2024-07-15 22:35:28.335903] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:12.819 [2024-07-15 22:35:28.335917] dpdk_governor.c: 160:_init: *ERROR*: Failed to initialize on core0 00:05:12.819 [2024-07-15 22:35:28.335926] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.819 [2024-07-15 22:35:28.335938] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.819 [2024-07-15 22:35:28.335945] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.819 [2024-07-15 22:35:28.335953] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.819 22:35:28 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.819 22:35:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.819 22:35:28 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.819 22:35:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 [2024-07-15 22:35:28.401069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.078 [2024-07-15 22:35:28.432791] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:13.078 22:35:28 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.078 22:35:28 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.078 22:35:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 ************************************ 00:05:13.078 START TEST scheduler_create_thread 00:05:13.078 ************************************ 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 2 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 3 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 4 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 5 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 6 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 7 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 8 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 9 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:13.078 22:35:28 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 10 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.078 22:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 22:35:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.456 22:35:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.456 22:35:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.456 22:35:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.456 22:35:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.833 ************************************ 00:05:15.833 END TEST scheduler_create_thread 00:05:15.833 ************************************ 00:05:15.833 22:35:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.833 00:05:15.833 real 0m2.616s 00:05:15.833 user 0m0.019s 00:05:15.833 sys 0m0.005s 00:05:15.833 22:35:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.833 22:35:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:15.833 22:35:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.833 22:35:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60017 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60017 ']' 00:05:15.833 
22:35:31 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60017 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60017 00:05:15.833 killing process with pid 60017 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60017' 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60017 00:05:15.833 22:35:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60017 00:05:16.091 [2024-07-15 22:35:31.541152] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:16.349 ************************************ 00:05:16.349 END TEST event_scheduler 00:05:16.349 ************************************ 00:05:16.349 00:05:16.349 real 0m4.548s 00:05:16.349 user 0m8.537s 00:05:16.349 sys 0m0.356s 00:05:16.349 22:35:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.349 22:35:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.349 22:35:31 event -- common/autotest_common.sh@1142 -- # return 0 00:05:16.349 22:35:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.349 22:35:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.349 22:35:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.349 22:35:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.349 22:35:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.349 ************************************ 00:05:16.349 START TEST app_repeat 00:05:16.349 ************************************ 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60117 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.349 Process app_repeat pid: 60117 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60117' 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.349 spdk_app_start Round 0 00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 
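Note: app_repeat brings the SPDK app framework up and down repeatedly inside one process; in each round the script waits for the dedicated /var/tmp/spdk-nbd.sock RPC socket, creates two malloc bdevs, exports them as /dev/nbd0 and /dev/nbd1, verifies data through them, and then kills the instance so the next round can start. The launch and per-round setup RPCs, as they appear in this log:

  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc0 (size/block-size args as used above)
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1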
00:05:16.349 22:35:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60117 /var/tmp/spdk-nbd.sock 00:05:16.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60117 ']' 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.349 22:35:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.349 [2024-07-15 22:35:31.874668] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:16.349 [2024-07-15 22:35:31.874770] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60117 ] 00:05:16.608 [2024-07-15 22:35:32.014743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.608 [2024-07-15 22:35:32.120923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.608 [2024-07-15 22:35:32.120936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.866 [2024-07-15 22:35:32.181989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.434 22:35:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.434 22:35:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:17.434 22:35:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.693 Malloc0 00:05:17.693 22:35:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.950 Malloc1 00:05:17.950 22:35:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.950 22:35:33 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.950 22:35:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.208 /dev/nbd0 00:05:18.208 22:35:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.208 22:35:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.208 1+0 records in 00:05:18.208 1+0 records out 00:05:18.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409626 s, 10.0 MB/s 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.208 22:35:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:18.208 22:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.208 22:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.208 22:35:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.467 /dev/nbd1 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.467 1+0 records in 00:05:18.467 1+0 records out 00:05:18.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311179 s, 13.2 MB/s 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.467 22:35:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.467 22:35:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd0", 00:05:18.726 "bdev_name": "Malloc0" 00:05:18.726 }, 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd1", 00:05:18.726 "bdev_name": "Malloc1" 00:05:18.726 } 00:05:18.726 ]' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd0", 00:05:18.726 "bdev_name": "Malloc0" 00:05:18.726 }, 00:05:18.726 { 00:05:18.726 "nbd_device": "/dev/nbd1", 00:05:18.726 "bdev_name": "Malloc1" 00:05:18.726 } 00:05:18.726 ]' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.726 /dev/nbd1' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.726 /dev/nbd1' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.726 256+0 records in 00:05:18.726 256+0 records out 00:05:18.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503022 s, 208 MB/s 
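Note: the dd above fills a 1 MiB scratch file with random data; the dd and cmp lines that follow write it through each nbd export with O_DIRECT and then compare it back, which is the whole of the nbd_dd_data_verify step. The commands, with paths shortened to their repo-relative form:

  dd if=/dev/urandom of=test/event/nbdrandtest bs=4096 count=256
  dd if=test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M test/event/nbdrandtest /dev/nbd0
  cmp -b -n 1M test/event/nbdrandtest /dev/nbd1
  rm test/event/nbdrandtest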
00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.726 256+0 records in 00:05:18.726 256+0 records out 00:05:18.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265763 s, 39.5 MB/s 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.726 22:35:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.984 256+0 records in 00:05:18.984 256+0 records out 00:05:18.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247101 s, 42.4 MB/s 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.984 22:35:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.243 22:35:34 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.243 22:35:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.501 22:35:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.761 22:35:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.761 22:35:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.021 22:35:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.280 [2024-07-15 22:35:35.662530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.280 [2024-07-15 22:35:35.751173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.280 [2024-07-15 22:35:35.751183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.280 [2024-07-15 22:35:35.805616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.280 [2024-07-15 22:35:35.805721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.280 [2024-07-15 22:35:35.805735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.567 spdk_app_start Round 1 00:05:23.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
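Note: this is the end-of-round pattern that repeats after every round: both nbd exports are stopped, nbd_get_disks is checked for an empty list, and spdk_kill_instance SIGTERM shuts the framework down; after the three-second sleep app_repeat starts it again and the next "spdk_app_start Round N" banner appears. The teardown RPCs as issued above:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks        # expected to return []
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3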
00:05:23.567 22:35:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.567 22:35:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:23.567 22:35:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60117 /var/tmp/spdk-nbd.sock 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60117 ']' 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.567 22:35:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:23.567 22:35:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.567 Malloc0 00:05:23.567 22:35:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.848 Malloc1 00:05:23.848 22:35:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.848 22:35:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.137 /dev/nbd0 00:05:24.137 22:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.137 22:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.137 
22:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.137 1+0 records in 00:05:24.137 1+0 records out 00:05:24.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463739 s, 8.8 MB/s 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.137 22:35:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.137 22:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.137 22:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.137 22:35:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.397 /dev/nbd1 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.397 1+0 records in 00:05:24.397 1+0 records out 00:05:24.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273093 s, 15.0 MB/s 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.397 22:35:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.397 22:35:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.656 { 00:05:24.656 "nbd_device": "/dev/nbd0", 00:05:24.656 "bdev_name": "Malloc0" 00:05:24.656 }, 00:05:24.656 { 00:05:24.656 "nbd_device": "/dev/nbd1", 00:05:24.656 "bdev_name": "Malloc1" 00:05:24.656 } 00:05:24.656 ]' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.656 { 00:05:24.656 "nbd_device": "/dev/nbd0", 00:05:24.656 "bdev_name": "Malloc0" 00:05:24.656 }, 00:05:24.656 { 00:05:24.656 "nbd_device": "/dev/nbd1", 00:05:24.656 "bdev_name": "Malloc1" 00:05:24.656 } 00:05:24.656 ]' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.656 /dev/nbd1' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.656 /dev/nbd1' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.656 256+0 records in 00:05:24.656 256+0 records out 00:05:24.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755673 s, 139 MB/s 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.656 256+0 records in 00:05:24.656 256+0 records out 00:05:24.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192146 s, 54.6 MB/s 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.656 256+0 records in 00:05:24.656 256+0 records out 00:05:24.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235786 s, 44.5 MB/s 00:05:24.656 22:35:40 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.656 22:35:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.657 22:35:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.916 22:35:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.175 
22:35:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.175 22:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.434 22:35:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.434 22:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.434 22:35:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.694 22:35:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.694 22:35:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.954 22:35:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.954 [2024-07-15 22:35:41.452793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.212 [2024-07-15 22:35:41.540945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.213 [2024-07-15 22:35:41.540956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.213 [2024-07-15 22:35:41.595552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.213 [2024-07-15 22:35:41.595690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.213 [2024-07-15 22:35:41.595705] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.739 spdk_app_start Round 2 00:05:28.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.739 22:35:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.739 22:35:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.739 22:35:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60117 /var/tmp/spdk-nbd.sock 00:05:28.739 22:35:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60117 ']' 00:05:28.739 22:35:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.739 22:35:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.739 22:35:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:28.739 22:35:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.739 22:35:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.996 22:35:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.996 22:35:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:28.996 22:35:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.255 Malloc0 00:05:29.255 22:35:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.513 Malloc1 00:05:29.770 22:35:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.770 22:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.771 22:35:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.771 /dev/nbd0 00:05:30.028 22:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.028 22:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.028 1+0 records in 00:05:30.028 1+0 records out 
00:05:30.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650532 s, 6.3 MB/s 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.028 22:35:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.028 22:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.028 22:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.028 22:35:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.287 /dev/nbd1 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.287 1+0 records in 00:05:30.287 1+0 records out 00:05:30.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168535 s, 24.3 MB/s 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.287 22:35:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.287 22:35:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.545 { 00:05:30.545 "nbd_device": "/dev/nbd0", 00:05:30.545 "bdev_name": "Malloc0" 00:05:30.545 }, 00:05:30.545 { 00:05:30.545 "nbd_device": "/dev/nbd1", 00:05:30.545 "bdev_name": "Malloc1" 00:05:30.545 } 
00:05:30.545 ]' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.545 { 00:05:30.545 "nbd_device": "/dev/nbd0", 00:05:30.545 "bdev_name": "Malloc0" 00:05:30.545 }, 00:05:30.545 { 00:05:30.545 "nbd_device": "/dev/nbd1", 00:05:30.545 "bdev_name": "Malloc1" 00:05:30.545 } 00:05:30.545 ]' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.545 /dev/nbd1' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.545 /dev/nbd1' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.545 256+0 records in 00:05:30.545 256+0 records out 00:05:30.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00762992 s, 137 MB/s 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.545 256+0 records in 00:05:30.545 256+0 records out 00:05:30.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221888 s, 47.3 MB/s 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.545 22:35:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.545 256+0 records in 00:05:30.546 256+0 records out 00:05:30.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228907 s, 45.8 MB/s 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.546 22:35:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.546 22:35:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.804 22:35:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.063 22:35:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.323 22:35:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.323 22:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.323 22:35:46 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.595 22:35:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.595 22:35:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.854 22:35:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.854 [2024-07-15 22:35:47.370336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.113 [2024-07-15 22:35:47.451998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.113 [2024-07-15 22:35:47.452008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.113 [2024-07-15 22:35:47.508054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.113 [2024-07-15 22:35:47.508186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.113 [2024-07-15 22:35:47.508201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.645 22:35:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60117 /var/tmp/spdk-nbd.sock 00:05:34.645 22:35:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60117 ']' 00:05:34.645 22:35:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.646 22:35:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.646 22:35:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:34.646 22:35:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.646 22:35:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:34.904 22:35:50 event.app_repeat -- event/event.sh@39 -- # killprocess 60117 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60117 ']' 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60117 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.904 22:35:50 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60117 00:05:35.162 killing process with pid 60117 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60117' 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60117 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60117 00:05:35.162 spdk_app_start is called in Round 0. 00:05:35.162 Shutdown signal received, stop current app iteration 00:05:35.162 Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 reinitialization... 00:05:35.162 spdk_app_start is called in Round 1. 00:05:35.162 Shutdown signal received, stop current app iteration 00:05:35.162 Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 reinitialization... 00:05:35.162 spdk_app_start is called in Round 2. 00:05:35.162 Shutdown signal received, stop current app iteration 00:05:35.162 Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 reinitialization... 00:05:35.162 spdk_app_start is called in Round 3. 
00:05:35.162 Shutdown signal received, stop current app iteration 00:05:35.162 22:35:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.162 22:35:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.162 00:05:35.162 real 0m18.846s 00:05:35.162 user 0m42.203s 00:05:35.162 sys 0m2.829s 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.162 ************************************ 00:05:35.162 END TEST app_repeat 00:05:35.162 ************************************ 00:05:35.162 22:35:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.162 22:35:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.162 22:35:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.162 22:35:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.162 22:35:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.162 22:35:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.162 22:35:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.421 ************************************ 00:05:35.421 START TEST cpu_locks 00:05:35.421 ************************************ 00:05:35.421 22:35:50 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.421 * Looking for test storage... 00:05:35.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.421 22:35:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.421 22:35:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.421 22:35:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.421 22:35:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.421 22:35:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.421 22:35:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.421 22:35:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.421 ************************************ 00:05:35.421 START TEST default_locks 00:05:35.421 ************************************ 00:05:35.421 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:35.421 22:35:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60544 00:05:35.421 22:35:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.421 22:35:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60544 00:05:35.422 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60544 ']' 00:05:35.422 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.422 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.422 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.422 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.422 22:35:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.422 [2024-07-15 22:35:50.893099] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:35.422 [2024-07-15 22:35:50.893372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:05:35.681 [2024-07-15 22:35:51.027629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.681 [2024-07-15 22:35:51.124745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.681 [2024-07-15 22:35:51.179991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.617 22:35:51 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.617 22:35:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:36.617 22:35:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60544 00:05:36.617 22:35:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60544 00:05:36.617 22:35:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60544 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60544 ']' 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60544 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60544 00:05:36.875 killing process with pid 60544 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60544' 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60544 00:05:36.875 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60544 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60544 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60544 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.134 ERROR: process (pid: 60544) is no longer running 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60544 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60544 ']' 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.134 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60544) - No such process 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.134 00:05:37.134 real 0m1.865s 00:05:37.134 user 0m2.002s 00:05:37.134 sys 0m0.553s 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.134 22:35:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.134 ************************************ 00:05:37.392 END TEST default_locks 00:05:37.392 ************************************ 00:05:37.392 22:35:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.392 22:35:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.392 22:35:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.393 22:35:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.393 22:35:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.393 ************************************ 00:05:37.393 START TEST default_locks_via_rpc 00:05:37.393 ************************************ 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60596 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60596 00:05:37.393 22:35:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60596 ']' 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.393 22:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.393 [2024-07-15 22:35:52.821070] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:37.393 [2024-07-15 22:35:52.821176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:05:37.652 [2024-07-15 22:35:52.962083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.652 [2024-07-15 22:35:53.058765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.652 [2024-07-15 22:35:53.114886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.217 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.475 22:35:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.475 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60596 00:05:38.475 22:35:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60596 00:05:38.475 22:35:53 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60596 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60596 ']' 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60596 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60596 00:05:38.733 killing process with pid 60596 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60596' 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60596 00:05:38.733 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60596 00:05:39.302 ************************************ 00:05:39.302 END TEST default_locks_via_rpc 00:05:39.302 ************************************ 00:05:39.302 00:05:39.302 real 0m1.924s 00:05:39.302 user 0m2.048s 00:05:39.302 sys 0m0.591s 00:05:39.302 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.302 22:35:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.302 22:35:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:39.302 22:35:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.302 22:35:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.302 22:35:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.302 22:35:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.302 ************************************ 00:05:39.302 START TEST non_locking_app_on_locked_coremask 00:05:39.302 ************************************ 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:39.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60647 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60647 /var/tmp/spdk.sock 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60647 ']' 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.302 22:35:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.302 [2024-07-15 22:35:54.798173] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:39.302 [2024-07-15 22:35:54.798285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:05:39.561 [2024-07-15 22:35:54.939012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.561 [2024-07-15 22:35:55.051760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.561 [2024-07-15 22:35:55.110366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60663 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60663 /var/tmp/spdk2.sock 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60663 ']' 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.495 22:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.495 [2024-07-15 22:35:55.843196] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:40.495 [2024-07-15 22:35:55.843431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:05:40.495 [2024-07-15 22:35:55.986325] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:40.495 [2024-07-15 22:35:55.986388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.752 [2024-07-15 22:35:56.222799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.010 [2024-07-15 22:35:56.335037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.268 22:35:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.268 22:35:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.268 22:35:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60647 00:05:41.268 22:35:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60647 00:05:41.268 22:35:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60647 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60647 ']' 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60647 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60647 00:05:42.202 killing process with pid 60647 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60647' 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60647 00:05:42.202 22:35:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60647 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60663 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60663 ']' 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60663 
00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60663 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.137 killing process with pid 60663 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60663' 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60663 00:05:43.137 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60663 00:05:43.396 00:05:43.396 real 0m4.035s 00:05:43.396 user 0m4.425s 00:05:43.396 sys 0m1.090s 00:05:43.396 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.396 22:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.396 ************************************ 00:05:43.396 END TEST non_locking_app_on_locked_coremask 00:05:43.396 ************************************ 00:05:43.396 22:35:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.396 22:35:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.396 22:35:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.396 22:35:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.396 22:35:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.396 ************************************ 00:05:43.396 START TEST locking_app_on_unlocked_coremask 00:05:43.396 ************************************ 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60730 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60730 /var/tmp/spdk.sock 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60730 ']' 00:05:43.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.396 22:35:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.396 [2024-07-15 22:35:58.886042] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:43.396 [2024-07-15 22:35:58.886399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:05:43.655 [2024-07-15 22:35:59.026420] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.655 [2024-07-15 22:35:59.026648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.655 [2024-07-15 22:35:59.133791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.655 [2024-07-15 22:35:59.191145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60746 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60746 /var/tmp/spdk2.sock 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60746 ']' 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.592 22:35:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.592 [2024-07-15 22:35:59.885752] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:05:44.592 [2024-07-15 22:35:59.886079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60746 ] 00:05:44.592 [2024-07-15 22:36:00.033161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.851 [2024-07-15 22:36:00.254908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.851 [2024-07-15 22:36:00.372193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.417 22:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.417 22:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.417 22:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60746 00:05:45.417 22:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.417 22:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60746 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60730 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60730 ']' 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60730 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60730 00:05:46.351 killing process with pid 60730 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60730' 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60730 00:05:46.351 22:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60730 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60746 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60746 ']' 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60746 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60746 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.284 killing process with pid 60746 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60746' 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60746 00:05:47.284 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60746 00:05:47.542 ************************************ 00:05:47.542 END TEST locking_app_on_unlocked_coremask 00:05:47.542 ************************************ 00:05:47.542 00:05:47.542 real 0m4.097s 00:05:47.542 user 0m4.520s 00:05:47.542 sys 0m1.106s 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.542 22:36:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:47.542 22:36:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:47.542 22:36:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.542 22:36:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.542 22:36:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.542 ************************************ 00:05:47.542 START TEST locking_app_on_locked_coremask 00:05:47.542 ************************************ 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:47.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60813 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60813 /var/tmp/spdk.sock 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60813 ']' 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.542 22:36:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.542 [2024-07-15 22:36:03.022758] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:05:47.542 [2024-07-15 22:36:03.022838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:05:47.800 [2024-07-15 22:36:03.157680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.800 [2024-07-15 22:36:03.268394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.800 [2024-07-15 22:36:03.325244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60829 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60829 /var/tmp/spdk2.sock 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60829 /var/tmp/spdk2.sock 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60829 /var/tmp/spdk2.sock 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60829 ']' 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.734 22:36:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.734 [2024-07-15 22:36:04.037487] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:05:48.735 [2024-07-15 22:36:04.037830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60829 ] 00:05:48.735 [2024-07-15 22:36:04.182386] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60813 has claimed it. 00:05:48.735 [2024-07-15 22:36:04.182470] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.301 ERROR: process (pid: 60829) is no longer running 00:05:49.301 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60829) - No such process 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60813 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60813 00:05:49.301 22:36:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60813 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60813 ']' 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60813 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60813 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60813' 00:05:49.870 killing process with pid 60813 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60813 00:05:49.870 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60813 00:05:50.129 00:05:50.129 real 0m2.655s 00:05:50.129 user 0m3.025s 00:05:50.129 sys 0m0.637s 00:05:50.129 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.129 ************************************ 00:05:50.129 END TEST 
locking_app_on_locked_coremask 00:05:50.129 22:36:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.129 ************************************ 00:05:50.129 22:36:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.129 22:36:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:50.129 22:36:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.129 22:36:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.129 22:36:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.129 ************************************ 00:05:50.129 START TEST locking_overlapped_coremask 00:05:50.129 ************************************ 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:50.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60880 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60880 /var/tmp/spdk.sock 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60880 ']' 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.129 22:36:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.387 [2024-07-15 22:36:05.738304] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:05:50.387 [2024-07-15 22:36:05.738421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:05:50.387 [2024-07-15 22:36:05.878278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.645 [2024-07-15 22:36:05.994579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.645 [2024-07-15 22:36:05.994685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.645 [2024-07-15 22:36:05.994694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.645 [2024-07-15 22:36:06.052733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60898 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60898 /var/tmp/spdk2.sock 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60898 /var/tmp/spdk2.sock 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:51.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60898 /var/tmp/spdk2.sock 00:05:51.211 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60898 ']' 00:05:51.212 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.212 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.212 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.212 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.212 22:36:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.471 [2024-07-15 22:36:06.831991] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:05:51.471 [2024-07-15 22:36:06.832467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60898 ] 00:05:51.471 [2024-07-15 22:36:06.988437] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60880 has claimed it. 00:05:51.471 [2024-07-15 22:36:06.988623] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.042 ERROR: process (pid: 60898) is no longer running 00:05:52.042 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60898) - No such process 00:05:52.042 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.042 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:52.042 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:52.042 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.042 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.042 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60880 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60880 ']' 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60880 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60880 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60880' 00:05:52.043 killing process with pid 60880 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60880 00:05:52.043 22:36:07 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@972 -- # wait 60880 00:05:52.611 00:05:52.611 real 0m2.254s 00:05:52.611 user 0m6.263s 00:05:52.611 sys 0m0.451s 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.611 ************************************ 00:05:52.611 END TEST locking_overlapped_coremask 00:05:52.611 ************************************ 00:05:52.611 22:36:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.611 22:36:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:52.611 22:36:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.611 22:36:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.611 22:36:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.611 ************************************ 00:05:52.611 START TEST locking_overlapped_coremask_via_rpc 00:05:52.611 ************************************ 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60938 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60938 /var/tmp/spdk.sock 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60938 ']' 00:05:52.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.611 22:36:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.611 [2024-07-15 22:36:08.026362] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:52.611 [2024-07-15 22:36:08.026447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60938 ] 00:05:52.611 [2024-07-15 22:36:08.151790] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.611 [2024-07-15 22:36:08.151841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.870 [2024-07-15 22:36:08.246840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.870 [2024-07-15 22:36:08.247020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.870 [2024-07-15 22:36:08.247023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.870 [2024-07-15 22:36:08.303576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.801 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60956 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60956 /var/tmp/spdk2.sock 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60956 ']' 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.802 22:36:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.802 [2024-07-15 22:36:09.096739] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:53.802 [2024-07-15 22:36:09.096852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60956 ] 00:05:53.802 [2024-07-15 22:36:09.245251] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
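Both spdk_tgt launch lines traced above pass --disable-cpumask-locks, which is why two targets with overlapping core masks can start side by side before the RPC re-enables locking: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they collide on core 2. The sketch below only restates the two traced command lines together to make the overlap visible (backgrounding with & is illustrative; the harness tracks the pids via waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # pid 60938, cores 0-2
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # pid 60956, cores 2-4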
00:05:53.802 [2024-07-15 22:36:09.245334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.059 [2024-07-15 22:36:09.562280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.059 [2024-07-15 22:36:09.562457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:54.059 [2024-07-15 22:36:09.562461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.317 [2024-07-15 22:36:09.704591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.883 [2024-07-15 22:36:10.209873] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60938 has claimed it. 00:05:54.883 request: 00:05:54.883 { 00:05:54.883 "method": "framework_enable_cpumask_locks", 00:05:54.883 "req_id": 1 00:05:54.883 } 00:05:54.883 Got JSON-RPC error response 00:05:54.883 response: 00:05:54.883 { 00:05:54.883 "code": -32603, 00:05:54.883 "message": "Failed to claim CPU core: 2" 00:05:54.883 } 00:05:54.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
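The JSON-RPC exchange above is the heart of this test: the first framework_enable_cpumask_locks call (default socket) lets pid 60938 claim cores 0-2, so the same call against the second target can no longer lock core 2 and returns -32603. Assuming rpc_cmd wraps scripts/rpc.py as in SPDK's test harness, the equivalent direct invocations would look roughly like this (a sketch, not part of the trace):

    scripts/rpc.py framework_enable_cpumask_locks                          # pid 60938: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # pid 60956: fails, core 2 already locked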
00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60938 /var/tmp/spdk.sock 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60938 ']' 00:05:54.883 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.884 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.884 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.884 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.884 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60956 /var/tmp/spdk2.sock 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60956 ']' 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.142 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.400 ************************************ 00:05:55.400 END TEST locking_overlapped_coremask_via_rpc 00:05:55.400 ************************************ 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.400 00:05:55.400 real 0m2.736s 00:05:55.400 user 0m1.312s 00:05:55.400 sys 0m0.201s 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.400 22:36:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.400 22:36:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.400 22:36:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:55.400 22:36:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60938 ]] 00:05:55.400 22:36:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60938 00:05:55.400 22:36:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60938 ']' 00:05:55.400 22:36:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60938 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60938 00:05:55.401 killing process with pid 60938 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60938' 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60938 00:05:55.401 22:36:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60938 00:05:55.659 22:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60956 ]] 00:05:55.659 22:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60956 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60956 ']' 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60956 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:55.659 22:36:11 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60956 00:05:55.659 killing process with pid 60956 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60956' 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60956 00:05:55.659 22:36:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60956 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60938 ]] 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60938 00:05:56.226 Process with pid 60938 is not found 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60938 ']' 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60938 00:05:56.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60938) - No such process 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60938 is not found' 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60956 ]] 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60956 00:05:56.226 Process with pid 60956 is not found 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60956 ']' 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60956 00:05:56.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60956) - No such process 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60956 is not found' 00:05:56.226 22:36:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.226 ************************************ 00:05:56.226 END TEST cpu_locks 00:05:56.226 ************************************ 00:05:56.226 00:05:56.226 real 0m21.029s 00:05:56.226 user 0m36.656s 00:05:56.226 sys 0m5.571s 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.226 22:36:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 22:36:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.484 00:05:56.484 real 0m48.851s 00:05:56.484 user 1m34.053s 00:05:56.484 sys 0m9.170s 00:05:56.484 ************************************ 00:05:56.484 END TEST event 00:05:56.484 ************************************ 00:05:56.484 22:36:11 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.484 22:36:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 22:36:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.484 22:36:11 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:56.484 22:36:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.484 22:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.484 22:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 ************************************ 00:05:56.484 START TEST thread 
00:05:56.484 ************************************ 00:05:56.484 22:36:11 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:56.484 * Looking for test storage... 00:05:56.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:56.484 22:36:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:56.484 22:36:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:56.484 22:36:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.484 22:36:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.484 ************************************ 00:05:56.484 START TEST thread_poller_perf 00:05:56.484 ************************************ 00:05:56.484 22:36:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:56.484 [2024-07-15 22:36:11.969613] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:56.484 [2024-07-15 22:36:11.969755] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61083 ] 00:05:56.743 [2024-07-15 22:36:12.107665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.743 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:56.743 [2024-07-15 22:36:12.194262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.119 ====================================== 00:05:58.119 busy:2210849652 (cyc) 00:05:58.119 total_run_count: 332000 00:05:58.119 tsc_hz: 2200000000 (cyc) 00:05:58.119 ====================================== 00:05:58.119 poller_cost: 6659 (cyc), 3026 (nsec) 00:05:58.119 00:05:58.119 real 0m1.332s 00:05:58.119 user 0m1.154s 00:05:58.119 sys 0m0.060s 00:05:58.119 22:36:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.119 ************************************ 00:05:58.119 END TEST thread_poller_perf 00:05:58.119 ************************************ 00:05:58.119 22:36:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.119 22:36:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:58.119 22:36:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.119 22:36:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:58.119 22:36:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.119 22:36:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.119 ************************************ 00:05:58.119 START TEST thread_poller_perf 00:05:58.119 ************************************ 00:05:58.119 22:36:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.119 [2024-07-15 22:36:13.359647] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
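The poller_cost figures printed in the run above are just the busy cycle count divided by the run count, converted to nanoseconds with the reported TSC rate. A quick reconstruction with the printed numbers (plain bash integer arithmetic, so rounding can differ by a nanosecond from the tool's own math):

    busy=2210849652; runs=332000; tsc_hz=2200000000
    echo "poller_cost: $((busy / runs)) (cyc), $((busy * 1000000000 / tsc_hz / runs)) (nsec)"
    # -> poller_cost: 6659 (cyc), 3026 (nsec)

The 0-microsecond-period run that follows reports its cost by the same formula.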
00:05:58.119 [2024-07-15 22:36:13.359793] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61114 ] 00:05:58.119 [2024-07-15 22:36:13.500869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.119 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:58.119 [2024-07-15 22:36:13.596660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.503 ====================================== 00:05:59.503 busy:2202573536 (cyc) 00:05:59.503 total_run_count: 4623000 00:05:59.503 tsc_hz: 2200000000 (cyc) 00:05:59.503 ====================================== 00:05:59.503 poller_cost: 476 (cyc), 216 (nsec) 00:05:59.503 00:05:59.503 real 0m1.347s 00:05:59.503 user 0m1.173s 00:05:59.503 sys 0m0.066s 00:05:59.503 22:36:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.503 ************************************ 00:05:59.503 END TEST thread_poller_perf 00:05:59.503 ************************************ 00:05:59.503 22:36:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.503 22:36:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:59.503 22:36:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:59.503 00:05:59.503 real 0m2.879s 00:05:59.503 user 0m2.400s 00:05:59.503 sys 0m0.246s 00:05:59.503 22:36:14 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.503 ************************************ 00:05:59.503 END TEST thread 00:05:59.503 ************************************ 00:05:59.503 22:36:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.503 22:36:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.503 22:36:14 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:59.503 22:36:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.503 22:36:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.503 22:36:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.503 ************************************ 00:05:59.503 START TEST accel 00:05:59.503 ************************************ 00:05:59.503 22:36:14 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:59.503 * Looking for test storage... 00:05:59.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:59.503 22:36:14 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:59.503 22:36:14 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:59.503 22:36:14 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.503 22:36:14 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61189 00:05:59.503 22:36:14 accel -- accel/accel.sh@63 -- # waitforlisten 61189 00:05:59.503 22:36:14 accel -- common/autotest_common.sh@829 -- # '[' -z 61189 ']' 00:05:59.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.503 22:36:14 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.503 22:36:14 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.503 22:36:14 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:59.503 22:36:14 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.503 22:36:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.503 22:36:14 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:59.503 22:36:14 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:59.503 22:36:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.503 22:36:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.503 22:36:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.503 22:36:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.503 22:36:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.503 22:36:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:59.503 22:36:14 accel -- accel/accel.sh@41 -- # jq -r . 00:05:59.503 [2024-07-15 22:36:14.938105] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:05:59.503 [2024-07-15 22:36:14.938214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61189 ] 00:05:59.762 [2024-07-15 22:36:15.073685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.762 [2024-07-15 22:36:15.179487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.762 [2024-07-15 22:36:15.235871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@862 -- # return 0 00:06:00.699 22:36:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:00.699 22:36:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:00.699 22:36:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:00.699 22:36:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:00.699 22:36:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:00.699 22:36:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:00.699 22:36:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 
00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.699 22:36:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.699 22:36:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.699 22:36:15 accel -- accel/accel.sh@75 -- # killprocess 61189 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@948 -- # '[' -z 61189 ']' 00:06:00.699 22:36:15 accel -- common/autotest_common.sh@952 -- # kill -0 61189 00:06:00.700 22:36:15 accel -- common/autotest_common.sh@953 -- # uname 00:06:00.700 22:36:15 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.700 22:36:15 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61189 00:06:00.700 killing process with pid 61189 00:06:00.700 22:36:16 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.700 22:36:16 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.700 22:36:16 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61189' 00:06:00.700 22:36:16 accel -- common/autotest_common.sh@967 -- # kill 61189 00:06:00.700 22:36:16 accel -- common/autotest_common.sh@972 -- # wait 61189 00:06:00.960 22:36:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:00.960 22:36:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.960 22:36:16 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.960 22:36:16 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:00.960 22:36:16 
accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:00.960 22:36:16 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.960 22:36:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.960 22:36:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.960 22:36:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.960 ************************************ 00:06:00.960 START TEST accel_missing_filename 00:06:00.960 ************************************ 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.960 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:00.960 22:36:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:00.960 [2024-07-15 22:36:16.522632] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:00.960 [2024-07-15 22:36:16.522796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61240 ] 00:06:01.220 [2024-07-15 22:36:16.653090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.220 [2024-07-15 22:36:16.745540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.478 [2024-07-15 22:36:16.803320] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.479 [2024-07-15 22:36:16.879082] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:01.479 A filename is required. 
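This failure is the expected one: the command traced above runs -w compress without -l, so accel_perf prints 'A filename is required.' and the NOT wrapper counts the non-zero exit as a pass. For illustration only, the corrected shape of the command would supply the input file (placeholder path, not taken from the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l <uncompressed-input-file>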
00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.479 00:06:01.479 real 0m0.492s 00:06:01.479 user 0m0.327s 00:06:01.479 sys 0m0.116s 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.479 22:36:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 ************************************ 00:06:01.479 END TEST accel_missing_filename 00:06:01.479 ************************************ 00:06:01.479 22:36:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.479 22:36:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.479 22:36:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:01.479 22:36:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.479 22:36:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 ************************************ 00:06:01.479 START TEST accel_compress_verify 00:06:01.479 ************************************ 00:06:01.479 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.479 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:01.479 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.479 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:01.479 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.479 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:01.738 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.738 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.738 22:36:17 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:01.738 22:36:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:01.738 [2024-07-15 22:36:17.069106] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:01.738 [2024-07-15 22:36:17.069193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61265 ] 00:06:01.738 [2024-07-15 22:36:17.202805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.738 [2024-07-15 22:36:17.300373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.996 [2024-07-15 22:36:17.357823] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.996 [2024-07-15 22:36:17.434680] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:01.996 00:06:01.996 Compression does not support the verify option, aborting. 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:01.996 ************************************ 00:06:01.996 END TEST accel_compress_verify 00:06:01.996 ************************************ 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.996 00:06:01.996 real 0m0.476s 00:06:01.996 user 0m0.295s 00:06:01.996 sys 0m0.125s 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.996 22:36:17 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:01.996 22:36:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.996 22:36:17 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:01.996 22:36:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.996 22:36:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.996 22:36:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.255 ************************************ 00:06:02.255 START TEST accel_wrong_workload 00:06:02.255 ************************************ 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:02.255 22:36:17 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:02.255 22:36:17 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:02.255 Unsupported workload type: foobar 00:06:02.255 [2024-07-15 22:36:17.595103] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:02.255 accel_perf options: 00:06:02.255 [-h help message] 00:06:02.255 [-q queue depth per core] 00:06:02.255 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:02.255 [-T number of threads per core 00:06:02.255 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:02.255 [-t time in seconds] 00:06:02.255 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:02.255 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:02.255 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:02.255 [-l for compress/decompress workloads, name of uncompressed input file 00:06:02.255 [-S for crc32c workload, use this seed value (default 0) 00:06:02.255 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:02.255 [-f for fill workload, use this BYTE value (default 255) 00:06:02.255 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:02.255 [-y verify result if this switch is on] 00:06:02.255 [-a tasks to allocate per core (default: same value as -q)] 00:06:02.255 Can be used to spread operations across a wider range of memory. 
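"Unsupported workload type: foobar" is rejected at option parsing, before the application starts; any value from the -w list in the usage text above is accepted. A rough sketch of a passing variant under the same path assumption (this exact form is what the copy test later in this run executes):

    # 'copy' is one of the supported -w workload types; -y verifies the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y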
00:06:02.255 ************************************ 00:06:02.255 END TEST accel_wrong_workload 00:06:02.255 ************************************ 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.255 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.256 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.256 00:06:02.256 real 0m0.031s 00:06:02.256 user 0m0.012s 00:06:02.256 sys 0m0.019s 00:06:02.256 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.256 22:36:17 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.256 22:36:17 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.256 ************************************ 00:06:02.256 START TEST accel_negative_buffers 00:06:02.256 ************************************ 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:02.256 22:36:17 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:02.256 -x option must be non-negative. 
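As with the workload check, -x is validated up front: the usage text (repeated just below) gives a minimum of 2 source buffers for xor, so -1 is refused before startup. A sketch of a passing variant, same path assumption as above; the buffer count of 3 is an arbitrary illustrative choice:

    # xor across 3 source buffers with result verification
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3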
00:06:02.256 [2024-07-15 22:36:17.672139] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:02.256 accel_perf options: 00:06:02.256 [-h help message] 00:06:02.256 [-q queue depth per core] 00:06:02.256 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:02.256 [-T number of threads per core 00:06:02.256 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:02.256 [-t time in seconds] 00:06:02.256 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:02.256 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:02.256 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:02.256 [-l for compress/decompress workloads, name of uncompressed input file 00:06:02.256 [-S for crc32c workload, use this seed value (default 0) 00:06:02.256 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:02.256 [-f for fill workload, use this BYTE value (default 255) 00:06:02.256 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:02.256 [-y verify result if this switch is on] 00:06:02.256 [-a tasks to allocate per core (default: same value as -q)] 00:06:02.256 Can be used to spread operations across a wider range of memory. 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.256 00:06:02.256 real 0m0.032s 00:06:02.256 user 0m0.017s 00:06:02.256 sys 0m0.014s 00:06:02.256 ************************************ 00:06:02.256 END TEST accel_negative_buffers 00:06:02.256 ************************************ 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.256 22:36:17 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.256 22:36:17 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.256 22:36:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.256 ************************************ 00:06:02.256 START TEST accel_crc32c 00:06:02.256 ************************************ 00:06:02.256 22:36:17 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:02.256 22:36:17 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:02.256 22:36:17 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:02.256 [2024-07-15 22:36:17.747702] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:02.256 [2024-07-15 22:36:17.747781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61329 ] 00:06:02.514 [2024-07-15 22:36:17.885164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.514 [2024-07-15 22:36:17.981631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:02.514 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r 
var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.515 22:36:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
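The xtrace above is accel_test driving the plain crc32c case: 4096-byte buffers, seed 32 (-S 32), software module, one second of I/O with verification. A minimal sketch of the same run outside the harness, assuming the build path from the trace; -c /dev/fd/62 only carries the harness's generated JSON config (empty here), so it is omitted:

    # CRC-32C over the default 4 KiB buffers for 1 second, seed 32, verify on
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y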
00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:03.921 22:36:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.921 00:06:03.921 real 0m1.480s 00:06:03.921 user 0m1.269s 00:06:03.921 sys 0m0.115s 00:06:03.921 22:36:19 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.921 ************************************ 00:06:03.921 END TEST accel_crc32c 00:06:03.921 ************************************ 00:06:03.921 22:36:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:03.921 22:36:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.921 22:36:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:03.921 22:36:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:03.921 22:36:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.921 22:36:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.921 ************************************ 00:06:03.921 START TEST accel_crc32c_C2 00:06:03.921 ************************************ 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:03.921 22:36:19 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:03.921 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:03.921 [2024-07-15 22:36:19.281738] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:03.921 [2024-07-15 22:36:19.281849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61362 ] 00:06:03.921 [2024-07-15 22:36:19.421166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.180 [2024-07-15 22:36:19.525844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:04.180 22:36:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 
-- # case "$var" in 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.558 ************************************ 00:06:05.558 END TEST accel_crc32c_C2 00:06:05.558 ************************************ 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.558 00:06:05.558 real 0m1.496s 00:06:05.558 user 0m1.290s 00:06:05.558 sys 0m0.116s 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.558 22:36:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:05.558 22:36:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.558 22:36:20 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:05.558 22:36:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:05.558 22:36:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.558 22:36:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.558 ************************************ 00:06:05.558 START TEST accel_copy 00:06:05.558 ************************************ 00:06:05.558 22:36:20 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:05.558 22:36:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:05.558 [2024-07-15 22:36:20.829692] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:05.558 [2024-07-15 22:36:20.829774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:06:05.558 [2024-07-15 22:36:20.963736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.558 [2024-07-15 22:36:21.067386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.817 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.818 22:36:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.752 22:36:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@20 
-- # val= 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:06.753 22:36:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.753 00:06:06.753 real 0m1.493s 00:06:06.753 user 0m1.282s 00:06:06.753 sys 0m0.117s 00:06:06.753 22:36:22 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.753 22:36:22 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:06.753 ************************************ 00:06:06.753 END TEST accel_copy 00:06:06.753 ************************************ 00:06:07.011 22:36:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.011 22:36:22 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.011 22:36:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:07.011 22:36:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.011 22:36:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.011 ************************************ 00:06:07.011 START TEST accel_fill 00:06:07.011 ************************************ 00:06:07.011 22:36:22 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.011 22:36:22 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:07.011 22:36:22 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:07.011 [2024-07-15 22:36:22.387846] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:07.011 [2024-07-15 22:36:22.388780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61427 ] 00:06:07.011 [2024-07-15 22:36:22.534612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.269 [2024-07-15 22:36:22.635239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:07.269 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.270 22:36:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.642 22:36:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.643 22:36:23 
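For fill, the harness overrides more of the defaults: -f 128 sets the fill byte (0x80 in the trace), and -q 64 / -a 64 raise the queue depth and per-core task count. A sketch of the equivalent direct run, same path assumption as the earlier examples:

    # fill 4 KiB buffers with byte 128, queue depth 64, 64 tasks per core, verify
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y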
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.643 ************************************ 00:06:08.643 END TEST accel_fill 00:06:08.643 ************************************ 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:08.643 22:36:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.643 00:06:08.643 real 0m1.512s 00:06:08.643 user 0m1.285s 00:06:08.643 sys 0m0.129s 00:06:08.643 22:36:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.643 22:36:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:08.643 22:36:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.643 22:36:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:08.643 22:36:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.643 22:36:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.643 22:36:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.643 ************************************ 00:06:08.643 START TEST accel_copy_crc32c 00:06:08.643 ************************************ 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:08.643 22:36:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
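The copy_crc32c case now starting combines the two earlier operations: each 4 KiB source buffer is copied and a CRC-32C is computed over the same data in a single accel operation, again with -y verification and the default seed of 0. A sketch of the direct invocation, same assumptions as the earlier examples:

    # copy + CRC-32C in one operation, default seed 0, verify the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y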
00:06:08.643 [2024-07-15 22:36:23.943759] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:08.643 [2024-07-15 22:36:23.943845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61467 ] 00:06:08.643 [2024-07-15 22:36:24.081802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.643 [2024-07-15 22:36:24.183501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.907 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.908 22:36:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.281 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:10.282 ************************************ 00:06:10.282 END TEST accel_copy_crc32c 00:06:10.282 ************************************ 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.282 00:06:10.282 real 0m1.501s 00:06:10.282 user 0m1.282s 00:06:10.282 sys 0m0.121s 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.282 22:36:25 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:10.282 22:36:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.282 22:36:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.282 22:36:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.282 22:36:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.282 22:36:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.282 ************************************ 00:06:10.282 START TEST accel_copy_crc32c_C2 00:06:10.282 ************************************ 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf 
-t 1 -w copy_crc32c -y -C 2 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:10.282 [2024-07-15 22:36:25.506699] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:10.282 [2024-07-15 22:36:25.506807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61496 ] 00:06:10.282 [2024-07-15 22:36:25.646431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.282 [2024-07-15 22:36:25.771531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:10.282 22:36:25 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.282 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.541 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.542 22:36:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.479 00:06:11.479 real 0m1.530s 00:06:11.479 user 0m1.304s 00:06:11.479 sys 0m0.128s 00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.479 ************************************ 00:06:11.479 END TEST accel_copy_crc32c_C2 00:06:11.479 ************************************ 
00:06:11.479 22:36:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:11.739 22:36:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.739 22:36:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:11.739 22:36:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.739 22:36:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.739 22:36:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.739 ************************************ 00:06:11.739 START TEST accel_dualcast 00:06:11.739 ************************************ 00:06:11.739 22:36:27 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:11.739 22:36:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:11.739 [2024-07-15 22:36:27.089710] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:11.739 [2024-07-15 22:36:27.089806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61536 ] 00:06:11.739 [2024-07-15 22:36:27.225286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.031 [2024-07-15 22:36:27.321115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.031 22:36:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 
22:36:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:13.411 22:36:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.411 00:06:13.411 real 0m1.497s 00:06:13.411 user 0m1.276s 00:06:13.411 sys 0m0.121s 00:06:13.411 22:36:28 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.411 ************************************ 00:06:13.411 END TEST accel_dualcast 00:06:13.411 ************************************ 00:06:13.411 22:36:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:13.411 22:36:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.411 22:36:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:13.411 22:36:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.411 22:36:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.411 22:36:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.411 ************************************ 00:06:13.411 START TEST accel_compare 00:06:13.411 ************************************ 00:06:13.411 22:36:28 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:13.411 [2024-07-15 22:36:28.633950] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:13.411 [2024-07-15 22:36:28.634022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61565 ] 00:06:13.411 [2024-07-15 22:36:28.767879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.411 [2024-07-15 22:36:28.864838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.411 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:13.412 22:36:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.790 22:36:30 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:14.790 22:36:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.790 00:06:14.790 real 0m1.476s 00:06:14.790 user 0m1.267s 00:06:14.790 sys 0m0.112s 00:06:14.790 22:36:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.790 ************************************ 00:06:14.790 END TEST accel_compare 00:06:14.790 ************************************ 00:06:14.790 22:36:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:14.790 22:36:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.790 22:36:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:14.790 22:36:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.790 22:36:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.790 22:36:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.790 ************************************ 00:06:14.790 START TEST accel_xor 00:06:14.790 ************************************ 00:06:14.790 22:36:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:14.790 22:36:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:14.790 [2024-07-15 22:36:30.166421] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:14.790 [2024-07-15 22:36:30.166506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61605 ] 00:06:14.790 [2024-07-15 22:36:30.304320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.050 [2024-07-15 22:36:30.395916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.050 22:36:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:15.050 22:36:30 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 22:36:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.429 00:06:16.429 real 0m1.490s 00:06:16.429 user 0m1.271s 00:06:16.429 sys 0m0.130s 00:06:16.429 22:36:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.429 ************************************ 00:06:16.429 END TEST accel_xor 00:06:16.429 ************************************ 00:06:16.429 22:36:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:16.429 22:36:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.429 22:36:31 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:16.429 22:36:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:16.429 22:36:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.429 22:36:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.429 ************************************ 00:06:16.429 START TEST accel_xor 00:06:16.429 ************************************ 00:06:16.429 22:36:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:16.429 22:36:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:16.429 [2024-07-15 22:36:31.706122] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:16.429 [2024-07-15 22:36:31.706223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61634 ] 00:06:16.429 [2024-07-15 22:36:31.845883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.429 [2024-07-15 22:36:31.951425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:16.688 22:36:32 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.688 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.689 22:36:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 ************************************ 00:06:17.624 END TEST accel_xor 00:06:17.624 ************************************ 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:17.624 22:36:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.624 00:06:17.624 real 0m1.500s 00:06:17.624 user 0m1.290s 00:06:17.624 sys 0m0.116s 00:06:17.624 22:36:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.624 22:36:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 22:36:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.884 22:36:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:17.884 22:36:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:17.884 22:36:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.884 22:36:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.884 ************************************ 00:06:17.884 START TEST accel_dif_verify 00:06:17.884 ************************************ 00:06:17.884 22:36:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:17.884 22:36:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:17.884 [2024-07-15 22:36:33.256855] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:17.884 [2024-07-15 22:36:33.256968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61674 ] 00:06:17.884 [2024-07-15 22:36:33.392365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.143 [2024-07-15 22:36:33.494091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.143 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.144 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.144 22:36:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.144 22:36:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.144 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.144 22:36:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:19.521 22:36:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.521 00:06:19.521 real 0m1.489s 00:06:19.521 user 0m1.280s 00:06:19.521 sys 0m0.117s 00:06:19.521 22:36:34 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.521 22:36:34 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:19.521 ************************************ 00:06:19.521 END TEST accel_dif_verify 00:06:19.521 ************************************ 00:06:19.521 22:36:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.521 22:36:34 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:19.521 22:36:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.521 22:36:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.521 22:36:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.521 ************************************ 00:06:19.521 START TEST accel_dif_generate 00:06:19.521 ************************************ 00:06:19.521 22:36:34 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 
-w dif_generate 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:19.521 22:36:34 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:19.521 [2024-07-15 22:36:34.796761] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:19.521 [2024-07-15 22:36:34.796852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61703 ] 00:06:19.521 [2024-07-15 22:36:34.929363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.521 [2024-07-15 22:36:35.026331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:19.779 22:36:35 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.779 22:36:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.736 ************************************ 00:06:20.736 END TEST accel_dif_generate 00:06:20.736 ************************************ 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:20.736 22:36:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.736 
00:06:20.736 real 0m1.474s 00:06:20.736 user 0m1.266s 00:06:20.736 sys 0m0.115s 00:06:20.736 22:36:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.736 22:36:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:20.736 22:36:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.736 22:36:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:20.736 22:36:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:20.736 22:36:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.736 22:36:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.736 ************************************ 00:06:20.736 START TEST accel_dif_generate_copy 00:06:20.736 ************************************ 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:20.736 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:20.994 [2024-07-15 22:36:36.317730] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:20.994 [2024-07-15 22:36:36.317829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61743 ] 00:06:20.995 [2024-07-15 22:36:36.452763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.253 [2024-07-15 22:36:36.569046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.253 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.254 22:36:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.631 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.631 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.631 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.631 22:36:37 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:22.631 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.632 00:06:22.632 real 0m1.511s 00:06:22.632 user 0m1.298s 00:06:22.632 sys 0m0.118s 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.632 22:36:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.632 ************************************ 00:06:22.632 END TEST accel_dif_generate_copy 00:06:22.632 ************************************ 00:06:22.632 22:36:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.632 22:36:37 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:22.632 22:36:37 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.632 22:36:37 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:22.632 22:36:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.632 22:36:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.632 ************************************ 00:06:22.632 START TEST accel_comp 00:06:22.632 ************************************ 00:06:22.632 22:36:37 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 
00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:22.632 22:36:37 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:22.632 [2024-07-15 22:36:37.875945] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:22.632 [2024-07-15 22:36:37.876023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61772 ] 00:06:22.632 [2024-07-15 22:36:38.007723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.632 [2024-07-15 22:36:38.120807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.890 22:36:38 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.890 22:36:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:23.823 22:36:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.823 ************************************ 00:06:23.823 END TEST accel_comp 00:06:23.823 ************************************ 00:06:23.823 00:06:23.823 real 0m1.516s 00:06:23.823 user 0m1.299s 00:06:23.823 sys 0m0.124s 00:06:23.823 22:36:39 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.823 22:36:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:24.080 22:36:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.080 22:36:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:24.080 22:36:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.080 22:36:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.080 22:36:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.080 ************************************ 00:06:24.080 START TEST accel_decomp 00:06:24.080 ************************************ 00:06:24.080 22:36:39 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:24.080 22:36:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:24.080 [2024-07-15 22:36:39.449683] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:24.080 [2024-07-15 22:36:39.449797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61814 ] 00:06:24.080 [2024-07-15 22:36:39.602155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.338 [2024-07-15 22:36:39.706575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.338 22:36:39 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.338 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.339 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.339 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.339 22:36:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.339 22:36:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.339 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.339 22:36:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.756 22:36:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.756 00:06:25.756 real 0m1.520s 00:06:25.756 user 0m1.301s 00:06:25.756 sys 0m0.126s 00:06:25.756 22:36:40 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.756 ************************************ 00:06:25.756 END TEST accel_decomp 00:06:25.756 ************************************ 00:06:25.756 22:36:40 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:25.756 22:36:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.756 22:36:40 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.756 
22:36:40 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:25.756 22:36:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.756 22:36:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.756 ************************************ 00:06:25.756 START TEST accel_decomp_full 00:06:25.756 ************************************ 00:06:25.756 22:36:40 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:25.756 22:36:40 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:25.756 [2024-07-15 22:36:41.011048] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
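The accel_decomp_full case starting here runs the same accel_perf example as the accel_decomp case that just reported its timings; the only difference visible in the trace is that -o 0 turns the per-operation data size from '4096 bytes' into '111250 bytes', i.e. the whole input file in one pass. A rough standalone equivalent of the command traced at accel/accel.sh@12, assuming the SPDK checkout used by this job and skipping the generated -c /dev/fd/62 JSON config (which should leave accel_perf on its default software module, the module the [[ -n software ]] checks expect):
# sketch only; paths and flags copied from the accel.sh@12 trace above
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0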
00:06:25.756 [2024-07-15 22:36:41.011173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61843 ] 00:06:25.756 [2024-07-15 22:36:41.158626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.756 [2024-07-15 22:36:41.278257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.013 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.014 22:36:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.385 22:36:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.385 00:06:27.385 real 0m1.532s 00:06:27.385 user 0m1.316s 00:06:27.385 sys 0m0.122s 00:06:27.385 22:36:42 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.385 ************************************ 00:06:27.385 END TEST accel_decomp_full 00:06:27.385 ************************************ 00:06:27.385 22:36:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:27.385 22:36:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.385 22:36:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.385 22:36:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:27.385 22:36:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.385 22:36:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.386 ************************************ 00:06:27.386 START TEST accel_decomp_mcore 00:06:27.386 ************************************ 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:27.386 [2024-07-15 22:36:42.594121] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:27.386 [2024-07-15 22:36:42.594222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61884 ] 00:06:27.386 [2024-07-15 22:36:42.732422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.386 [2024-07-15 22:36:42.849935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.386 [2024-07-15 22:36:42.850086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.386 [2024-07-15 22:36:42.850998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.386 [2024-07-15 22:36:42.851006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.386 22:36:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 
22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.759 00:06:28.759 real 0m1.531s 00:06:28.759 user 0m4.739s 00:06:28.759 sys 0m0.128s 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.759 ************************************ 00:06:28.759 22:36:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:28.759 END TEST accel_decomp_mcore 00:06:28.759 ************************************ 00:06:28.759 22:36:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.759 22:36:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.759 22:36:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:28.759 22:36:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.759 22:36:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.759 ************************************ 00:06:28.759 START TEST accel_decomp_full_mcore 00:06:28.759 ************************************ 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.760 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:28.760 
22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:28.760 [2024-07-15 22:36:44.172470] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:28.760 [2024-07-15 22:36:44.172615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61916 ] 00:06:28.760 [2024-07-15 22:36:44.315718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.019 [2024-07-15 22:36:44.447865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.020 [2024-07-15 22:36:44.447998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.020 [2024-07-15 22:36:44.448810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.020 [2024-07-15 22:36:44.448823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.020 22:36:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:30.396 22:36:45 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.396 00:06:30.396 real 0m1.557s 00:06:30.396 user 0m4.800s 00:06:30.396 sys 0m0.132s 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.396 ************************************ 00:06:30.396 END TEST accel_decomp_full_mcore 00:06:30.396 ************************************ 00:06:30.396 22:36:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:30.396 22:36:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.396 22:36:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:30.396 22:36:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:30.396 22:36:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.396 22:36:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.396 ************************************ 00:06:30.396 START TEST accel_decomp_mthread 00:06:30.396 ************************************ 00:06:30.396 22:36:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:30.396 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:30.396 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:30.396 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.396 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:30.397 22:36:45 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:30.397 [2024-07-15 22:36:45.773802] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
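Both mcore variants above were launched with -m 0xf: the EAL banner reports four cores and reactors come up on cores 0 through 3, which is why user time (about 4.7-4.8 s) is roughly four times the ~1.5 s wall time. The accel_decomp_mthread case starting here goes back to a single core (0x1) but adds -T 2, which shows up in the trace as val=2, presumably two worker threads on that core. Rough standalone equivalents, under the same assumptions as the earlier sketch:
# sketch only; flags copied from the run_test lines, SPDK as before
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf   # four reactors, cores 0-3
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2     # single core, val=2 worker threads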
00:06:30.397 [2024-07-15 22:36:45.773925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61959 ] 00:06:30.397 [2024-07-15 22:36:45.909679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.656 [2024-07-15 22:36:46.006376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.656 22:36:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.035 00:06:32.035 real 0m1.485s 00:06:32.035 user 0m1.275s 00:06:32.035 sys 0m0.119s 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.035 ************************************ 00:06:32.035 END TEST accel_decomp_mthread 00:06:32.035 22:36:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:32.035 ************************************ 00:06:32.035 22:36:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.035 22:36:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.035 22:36:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:32.035 22:36:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.035 22:36:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.035 ************************************ 00:06:32.035 START TEST 
accel_decomp_full_mthread 00:06:32.035 ************************************ 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:32.035 [2024-07-15 22:36:47.303017] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
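accel_decomp_full_mthread, whose setup is traced here, simply combines the two previous variations: the full 111250-byte buffer (-o 0) and two worker threads on one core (-T 2). Standalone sketch under the same assumptions as above:
# sketch only; flags copied from the run_test accel_decomp_full_mthread line
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2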
00:06:32.035 [2024-07-15 22:36:47.303092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:06:32.035 [2024-07-15 22:36:47.434900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.035 [2024-07-15 22:36:47.539110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.035 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.294 22:36:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.231 00:06:33.231 real 0m1.511s 00:06:33.231 user 0m1.310s 00:06:33.231 sys 0m0.109s 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.231 ************************************ 00:06:33.231 END TEST accel_decomp_full_mthread 00:06:33.231 ************************************ 00:06:33.231 22:36:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:33.497 22:36:48 accel -- 
common/autotest_common.sh@1142 -- # return 0 00:06:33.497 22:36:48 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:33.497 22:36:48 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.497 22:36:48 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:33.497 22:36:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.497 22:36:48 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.497 22:36:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.497 22:36:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.497 22:36:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.497 22:36:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.497 22:36:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.497 22:36:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.497 22:36:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:33.497 22:36:48 accel -- accel/accel.sh@41 -- # jq -r . 00:06:33.497 ************************************ 00:06:33.497 START TEST accel_dif_functional_tests 00:06:33.497 ************************************ 00:06:33.497 22:36:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.497 [2024-07-15 22:36:48.896275] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:33.497 [2024-07-15 22:36:48.896408] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62029 ] 00:06:33.497 [2024-07-15 22:36:49.032682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.756 [2024-07-15 22:36:49.144697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.756 [2024-07-15 22:36:49.144828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.756 [2024-07-15 22:36:49.144831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.756 [2024-07-15 22:36:49.199810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.756 00:06:33.756 00:06:33.756 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.756 http://cunit.sourceforge.net/ 00:06:33.756 00:06:33.756 00:06:33.756 Suite: accel_dif 00:06:33.756 Test: verify: DIF generated, GUARD check ...passed 00:06:33.756 Test: verify: DIF generated, APPTAG check ...passed 00:06:33.756 Test: verify: DIF generated, REFTAG check ...passed 00:06:33.756 Test: verify: DIF not generated, GUARD check ...passed 00:06:33.756 Test: verify: DIF not generated, APPTAG check ...passed 00:06:33.756 Test: verify: DIF not generated, REFTAG check ...passed 00:06:33.756 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:33.756 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:33.756 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:33.756 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:33.756 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-15 22:36:49.236886] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:33.756 [2024-07-15 22:36:49.237000] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:33.756 [2024-07-15 22:36:49.237041] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:33.756 [2024-07-15 22:36:49.237138] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:33.756 passed 00:06:33.756 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:33.756 Test: verify copy: DIF generated, GUARD check ...passed 00:06:33.756 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:33.756 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:33.756 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:33.756 Test: verify copy: DIF not generated, APPTAG check ...passed 00:06:33.756 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 22:36:49.237326] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:33.756 [2024-07-15 22:36:49.237528] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:33.756 [2024-07-15 22:36:49.237584] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:33.756 [2024-07-15 22:36:49.237626] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:33.756 passed 00:06:33.756 Test: generate copy: DIF generated, GUARD check ...passed 00:06:33.756 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:33.756 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:33.756 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:33.756 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:33.756 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:33.756 Test: generate copy: iovecs-len validate ...passed 00:06:33.756 Test: generate copy: buffer alignment validate ...passed 00:06:33.756 00:06:33.756 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.756 suites 1 1 n/a 0 0 00:06:33.756 tests 26 26 26 0 0 00:06:33.756 asserts 115 115 115 0 n/a 00:06:33.756 00:06:33.756 Elapsed time = 0.003 seconds 00:06:33.756 [2024-07-15 22:36:49.237955] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
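The accel_perf command line and the DIF test binary traced above can be reproduced outside the harness. A minimal sketch, assuming the SPDK repo sits at the same path as in this run and that the tools tolerate being launched without the -c /dev/fd/62 JSON config that build_accel_config feeds them here (in this run that config is empty, since no accel module is configured):

  # Multi-threaded full-file decompress case; flags copied verbatim from the
  # accel_perf invocation in the trace above.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2

  # CUnit DIF functional suite exercised above (26 tests covering GUARD, APPTAG
  # and REFTAG verify/generate-copy checks).
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif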
00:06:34.014 00:06:34.014 real 0m0.610s 00:06:34.014 user 0m0.814s 00:06:34.014 sys 0m0.146s 00:06:34.014 22:36:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.014 ************************************ 00:06:34.014 END TEST accel_dif_functional_tests 00:06:34.014 ************************************ 00:06:34.014 22:36:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:34.014 22:36:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.014 00:06:34.014 real 0m34.711s 00:06:34.014 user 0m36.376s 00:06:34.014 sys 0m4.113s 00:06:34.014 22:36:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.014 ************************************ 00:06:34.014 22:36:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.014 END TEST accel 00:06:34.014 ************************************ 00:06:34.014 22:36:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.014 22:36:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:34.015 22:36:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.015 22:36:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.015 22:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:34.015 ************************************ 00:06:34.015 START TEST accel_rpc 00:06:34.015 ************************************ 00:06:34.015 22:36:49 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:34.273 * Looking for test storage... 00:06:34.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:34.273 22:36:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.273 22:36:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62096 00:06:34.273 22:36:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62096 00:06:34.273 22:36:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:34.273 22:36:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62096 ']' 00:06:34.273 22:36:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.273 22:36:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.273 22:36:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.273 22:36:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.273 22:36:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.273 [2024-07-15 22:36:49.691299] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:34.273 [2024-07-15 22:36:49.691409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62096 ] 00:06:34.273 [2024-07-15 22:36:49.829210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.531 [2024-07-15 22:36:49.938549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.465 22:36:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.465 22:36:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:35.465 22:36:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:35.465 22:36:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:35.465 22:36:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:35.465 22:36:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:35.465 22:36:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:35.465 22:36:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.465 22:36:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.465 22:36:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.465 ************************************ 00:06:35.465 START TEST accel_assign_opcode 00:06:35.465 ************************************ 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.465 [2024-07-15 22:36:50.723138] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.465 [2024-07-15 22:36:50.731128] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.465 [2024-07-15 22:36:50.797939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.465 22:36:50 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:35.465 22:36:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.465 software 00:06:35.465 00:06:35.465 real 0m0.310s 00:06:35.465 user 0m0.051s 00:06:35.465 sys 0m0.013s 00:06:35.465 22:36:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.465 22:36:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.465 ************************************ 00:06:35.465 END TEST accel_assign_opcode 00:06:35.465 ************************************ 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:35.724 22:36:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62096 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62096 ']' 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62096 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62096 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62096' 00:06:35.724 killing process with pid 62096 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@967 -- # kill 62096 00:06:35.724 22:36:51 accel_rpc -- common/autotest_common.sh@972 -- # wait 62096 00:06:35.982 00:06:35.982 real 0m1.967s 00:06:35.982 user 0m2.091s 00:06:35.982 sys 0m0.460s 00:06:35.982 22:36:51 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.982 22:36:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.982 ************************************ 00:06:35.982 END TEST accel_rpc 00:06:35.982 ************************************ 00:06:36.240 22:36:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.240 22:36:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.240 22:36:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.240 22:36:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.240 22:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.240 ************************************ 00:06:36.240 START TEST app_cmdline 00:06:36.240 ************************************ 00:06:36.240 22:36:51 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.240 * Looking for test storage... 
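The accel_assign_opcode test just finished boils down to three RPCs issued against a spdk_tgt started with --wait-for-rpc. A minimal sketch using only the RPC names that appear in the trace; the plain rpc.py calls below assume the default /var/tmp/spdk.sock socket the target advertises while waiting:

  # Before framework_start_init the accel framework is not yet initialized,
  # so opcode-to-module assignments can still be changed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # Read the assignment back; the test extracts .copy with jq and expects "software".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy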
00:06:36.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.240 22:36:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.240 22:36:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62189 00:06:36.240 22:36:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62189 00:06:36.240 22:36:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.240 22:36:51 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62189 ']' 00:06:36.241 22:36:51 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.241 22:36:51 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.241 22:36:51 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.241 22:36:51 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.241 22:36:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.241 [2024-07-15 22:36:51.746213] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:36.241 [2024-07-15 22:36:51.746365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:06:36.499 [2024-07-15 22:36:51.886318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.499 [2024-07-15 22:36:51.997950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.499 [2024-07-15 22:36:52.057657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.432 22:36:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.432 22:36:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:37.432 22:36:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:37.691 { 00:06:37.692 "version": "SPDK v24.09-pre git sha1 d608564df", 00:06:37.692 "fields": { 00:06:37.692 "major": 24, 00:06:37.692 "minor": 9, 00:06:37.692 "patch": 0, 00:06:37.692 "suffix": "-pre", 00:06:37.692 "commit": "d608564df" 00:06:37.692 } 00:06:37.692 } 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.692 22:36:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:37.692 22:36:53 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.950 request: 00:06:37.950 { 00:06:37.950 "method": "env_dpdk_get_mem_stats", 00:06:37.950 "req_id": 1 00:06:37.950 } 00:06:37.950 Got JSON-RPC error response 00:06:37.950 response: 00:06:37.950 { 00:06:37.950 "code": -32601, 00:06:37.950 "message": "Method not found" 00:06:37.950 } 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.950 22:36:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62189 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62189 ']' 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62189 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62189 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.950 killing process with pid 62189 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62189' 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@967 -- # kill 62189 00:06:37.950 22:36:53 app_cmdline -- common/autotest_common.sh@972 -- # wait 62189 00:06:38.516 00:06:38.516 real 0m2.212s 00:06:38.516 user 0m2.772s 00:06:38.516 sys 0m0.538s 00:06:38.516 22:36:53 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.516 22:36:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.516 ************************************ 00:06:38.516 END TEST app_cmdline 00:06:38.516 ************************************ 
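The app_cmdline run above is effectively an RPC allow-list check: spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, the two allowed methods are exercised, and anything else must fail. A minimal sketch under the same assumption (target still running on the default socket):

  # Allowed methods return normally:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # Any method outside the allow list is rejected with JSON-RPC error -32601
  # ("Method not found"), exactly the response captured above:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats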
00:06:38.516 22:36:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.516 22:36:53 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:38.516 22:36:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.516 22:36:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.516 22:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.516 ************************************ 00:06:38.516 START TEST version 00:06:38.516 ************************************ 00:06:38.516 22:36:53 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:38.516 * Looking for test storage... 00:06:38.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:38.516 22:36:53 version -- app/version.sh@17 -- # get_header_version major 00:06:38.516 22:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # cut -f2 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.516 22:36:53 version -- app/version.sh@17 -- # major=24 00:06:38.516 22:36:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.516 22:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # cut -f2 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.516 22:36:53 version -- app/version.sh@18 -- # minor=9 00:06:38.516 22:36:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # cut -f2 00:06:38.516 22:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.516 22:36:53 version -- app/version.sh@19 -- # patch=0 00:06:38.516 22:36:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # cut -f2 00:06:38.516 22:36:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.516 22:36:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.516 22:36:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.516 22:36:53 version -- app/version.sh@22 -- # version=24.9 00:06:38.516 22:36:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.516 22:36:53 version -- app/version.sh@28 -- # version=24.9rc0 00:06:38.516 22:36:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:38.516 22:36:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.516 22:36:53 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:38.516 22:36:53 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:38.516 00:06:38.516 real 0m0.153s 00:06:38.516 user 0m0.094s 00:06:38.516 sys 0m0.090s 00:06:38.516 22:36:53 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.516 22:36:53 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.516 
************************************ 00:06:38.516 END TEST version 00:06:38.516 ************************************ 00:06:38.516 22:36:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.516 22:36:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:38.516 22:36:54 -- spdk/autotest.sh@198 -- # uname -s 00:06:38.516 22:36:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:38.516 22:36:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:38.516 22:36:54 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:38.516 22:36:54 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:38.516 22:36:54 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:38.516 22:36:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.516 22:36:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.516 22:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.516 ************************************ 00:06:38.516 START TEST spdk_dd 00:06:38.516 ************************************ 00:06:38.516 22:36:54 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:38.774 * Looking for test storage... 00:06:38.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:38.774 22:36:54 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.774 22:36:54 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.774 22:36:54 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.774 22:36:54 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.774 22:36:54 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 22:36:54 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 22:36:54 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 22:36:54 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:38.774 22:36:54 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.774 22:36:54 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:39.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.033 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:39.033 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:39.033 22:36:54 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:39.033 22:36:54 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:39.033 22:36:54 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:39.033 22:36:54 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:39.033 22:36:54 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:39.033 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:39.034 22:36:54 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:39.034 
22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:39.034 * spdk_dd linked to liburing 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:39.034 22:36:54 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:39.034 22:36:54 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:39.035 22:36:54 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:39.035 22:36:54 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:39.035 22:36:54 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:39.035 22:36:54 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:39.035 22:36:54 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:39.035 22:36:54 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:39.035 22:36:54 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:39.035 22:36:54 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:39.035 22:36:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.035 22:36:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.035 22:36:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:39.035 ************************************ 00:06:39.035 START TEST spdk_dd_basic_rw 00:06:39.035 ************************************ 00:06:39.035 22:36:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:39.293 * Looking for test storage... 
00:06:39.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:39.293 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.553 ************************************ 00:06:39.553 START TEST dd_bs_lt_native_bs 00:06:39.553 ************************************ 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:39.553 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:39.554 22:36:54 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:39.554 { 00:06:39.554 "subsystems": [ 00:06:39.554 { 00:06:39.554 "subsystem": "bdev", 00:06:39.554 "config": [ 00:06:39.554 { 00:06:39.554 "params": { 00:06:39.554 "trtype": "pcie", 00:06:39.554 "traddr": "0000:00:10.0", 00:06:39.554 "name": "Nvme0" 00:06:39.554 }, 00:06:39.554 "method": "bdev_nvme_attach_controller" 00:06:39.554 }, 00:06:39.554 { 00:06:39.554 "method": "bdev_wait_for_examine" 00:06:39.554 } 00:06:39.554 ] 00:06:39.554 } 00:06:39.554 ] 00:06:39.554 } 00:06:39.554 [2024-07-15 22:36:54.934799] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
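A minimal sketch of the two steps the trace above exercises, not the test scripts themselves: dd/common.sh derives the native block size from the spdk_nvme_identify dump, and dd_bs_lt_native_bs then expects spdk_dd to reject a --bs below it. The config file path below is hypothetical; the real run pipes the same JSON and its input through /dev/fd descriptors and wraps the call in its NOT helper.

# Detect the native block size of the controller at 0000:00:10.0 (yields 4096 here).
mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
re_cur='Current LBA Format: *LBA Format #([0-9]+)'
[[ ${id[*]} =~ $re_cur ]] && lbaf=${BASH_REMATCH[1]}                # "04" in this trace
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ ${id[*]} =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}          # 4096

# Hypothetical stand-in for the gen_conf JSON shown in the trace.
bdev_conf=/tmp/nvme0_bdev.json
cat > "$bdev_conf" <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
    "method": "bdev_nvme_attach_controller" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF

# dd_bs_lt_native_bs: a --bs smaller than native_bs must fail (/dev/zero stands in for the generated input).
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json "$bdev_conf"; then
  echo "unexpected: bs=2048 < native_bs=$native_bs was accepted" >&2
fi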
00:06:39.554 [2024-07-15 22:36:54.934904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62509 ] 00:06:39.554 [2024-07-15 22:36:55.073802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.811 [2024-07-15 22:36:55.209874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.811 [2024-07-15 22:36:55.274616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.069 [2024-07-15 22:36:55.390030] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:40.069 [2024-07-15 22:36:55.390128] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.069 [2024-07-15 22:36:55.518441] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.069 00:06:40.069 real 0m0.746s 00:06:40.069 user 0m0.521s 00:06:40.069 sys 0m0.179s 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.069 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:40.069 ************************************ 00:06:40.069 END TEST dd_bs_lt_native_bs 00:06:40.069 ************************************ 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.326 ************************************ 00:06:40.326 START TEST dd_rw 00:06:40.326 ************************************ 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 
00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:40.326 22:36:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.903 22:36:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:40.903 22:36:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.903 22:36:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.903 22:36:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.903 [2024-07-15 22:36:56.379412] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:40.903 [2024-07-15 22:36:56.379584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62545 ] 00:06:40.903 { 00:06:40.903 "subsystems": [ 00:06:40.903 { 00:06:40.903 "subsystem": "bdev", 00:06:40.903 "config": [ 00:06:40.903 { 00:06:40.903 "params": { 00:06:40.903 "trtype": "pcie", 00:06:40.903 "traddr": "0000:00:10.0", 00:06:40.903 "name": "Nvme0" 00:06:40.903 }, 00:06:40.903 "method": "bdev_nvme_attach_controller" 00:06:40.903 }, 00:06:40.903 { 00:06:40.903 "method": "bdev_wait_for_examine" 00:06:40.903 } 00:06:40.903 ] 00:06:40.903 } 00:06:40.903 ] 00:06:40.903 } 00:06:41.161 [2024-07-15 22:36:56.525043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.161 [2024-07-15 22:36:56.672204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.418 [2024-07-15 22:36:56.741980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.676  Copying: 60/60 [kB] (average 29 MBps) 00:06:41.676 00:06:41.676 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:41.676 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:41.676 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.676 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 { 00:06:41.676 "subsystems": [ 00:06:41.676 { 00:06:41.676 "subsystem": "bdev", 00:06:41.676 "config": [ 00:06:41.676 { 00:06:41.676 "params": { 00:06:41.676 "trtype": "pcie", 
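For reference, a minimal reconstruction of the dd_rw setup traced above: the block-size list is built by left-shifting the detected native block size, and each resulting size is driven at queue depths 1 and 64.

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($(( native_bs << bs )))   # 4096, 8192, 16384
done
# Each (block size, queue depth) pair then gets a fixed-size transfer; the trace uses
# count=15 at bs=4096 (size = 15 * 4096 = 61440) and smaller counts for the larger sizes later on.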
00:06:41.676 "traddr": "0000:00:10.0", 00:06:41.676 "name": "Nvme0" 00:06:41.676 }, 00:06:41.676 "method": "bdev_nvme_attach_controller" 00:06:41.676 }, 00:06:41.676 { 00:06:41.676 "method": "bdev_wait_for_examine" 00:06:41.676 } 00:06:41.676 ] 00:06:41.676 } 00:06:41.676 ] 00:06:41.676 } 00:06:41.676 [2024-07-15 22:36:57.171121] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:41.676 [2024-07-15 22:36:57.171230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62559 ] 00:06:41.934 [2024-07-15 22:36:57.312151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.934 [2024-07-15 22:36:57.445985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.192 [2024-07-15 22:36:57.508237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.451  Copying: 60/60 [kB] (average 19 MBps) 00:06:42.451 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.451 22:36:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.451 [2024-07-15 22:36:57.899089] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:42.451 [2024-07-15 22:36:57.899189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62580 ] 00:06:42.451 { 00:06:42.451 "subsystems": [ 00:06:42.451 { 00:06:42.451 "subsystem": "bdev", 00:06:42.451 "config": [ 00:06:42.451 { 00:06:42.451 "params": { 00:06:42.451 "trtype": "pcie", 00:06:42.451 "traddr": "0000:00:10.0", 00:06:42.451 "name": "Nvme0" 00:06:42.451 }, 00:06:42.451 "method": "bdev_nvme_attach_controller" 00:06:42.451 }, 00:06:42.451 { 00:06:42.451 "method": "bdev_wait_for_examine" 00:06:42.451 } 00:06:42.451 ] 00:06:42.451 } 00:06:42.451 ] 00:06:42.451 } 00:06:42.710 [2024-07-15 22:36:58.038589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.710 [2024-07-15 22:36:58.152500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.710 [2024-07-15 22:36:58.208740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.228  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:43.228 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:43.228 22:36:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.796 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:43.796 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:43.796 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.796 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.796 [2024-07-15 22:36:59.292938] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:43.796 [2024-07-15 22:36:59.293206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62599 ] 00:06:43.796 { 00:06:43.796 "subsystems": [ 00:06:43.796 { 00:06:43.796 "subsystem": "bdev", 00:06:43.796 "config": [ 00:06:43.796 { 00:06:43.796 "params": { 00:06:43.796 "trtype": "pcie", 00:06:43.796 "traddr": "0000:00:10.0", 00:06:43.796 "name": "Nvme0" 00:06:43.796 }, 00:06:43.796 "method": "bdev_nvme_attach_controller" 00:06:43.796 }, 00:06:43.796 { 00:06:43.796 "method": "bdev_wait_for_examine" 00:06:43.796 } 00:06:43.796 ] 00:06:43.796 } 00:06:43.796 ] 00:06:43.796 } 00:06:44.055 [2024-07-15 22:36:59.438506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.055 [2024-07-15 22:36:59.570794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.314 [2024-07-15 22:36:59.639204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.573  Copying: 60/60 [kB] (average 58 MBps) 00:06:44.573 00:06:44.573 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:44.573 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:44.573 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.573 22:36:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.573 { 00:06:44.573 "subsystems": [ 00:06:44.573 { 00:06:44.573 "subsystem": "bdev", 00:06:44.573 "config": [ 00:06:44.573 { 00:06:44.573 "params": { 00:06:44.573 "trtype": "pcie", 00:06:44.573 "traddr": "0000:00:10.0", 00:06:44.573 "name": "Nvme0" 00:06:44.573 }, 00:06:44.573 "method": "bdev_nvme_attach_controller" 00:06:44.573 }, 00:06:44.573 { 00:06:44.573 "method": "bdev_wait_for_examine" 00:06:44.573 } 00:06:44.573 ] 00:06:44.573 } 00:06:44.573 ] 00:06:44.573 } 00:06:44.573 [2024-07-15 22:37:00.023069] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:44.573 [2024-07-15 22:37:00.023166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62618 ] 00:06:44.831 [2024-07-15 22:37:00.163905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.831 [2024-07-15 22:37:00.289329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.831 [2024-07-15 22:37:00.352648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.390  Copying: 60/60 [kB] (average 29 MBps) 00:06:45.390 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.390 22:37:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.390 [2024-07-15 22:37:00.763017] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:45.390 [2024-07-15 22:37:00.763121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62639 ] 00:06:45.390 { 00:06:45.390 "subsystems": [ 00:06:45.390 { 00:06:45.390 "subsystem": "bdev", 00:06:45.390 "config": [ 00:06:45.390 { 00:06:45.390 "params": { 00:06:45.390 "trtype": "pcie", 00:06:45.390 "traddr": "0000:00:10.0", 00:06:45.390 "name": "Nvme0" 00:06:45.390 }, 00:06:45.390 "method": "bdev_nvme_attach_controller" 00:06:45.390 }, 00:06:45.390 { 00:06:45.390 "method": "bdev_wait_for_examine" 00:06:45.390 } 00:06:45.390 ] 00:06:45.390 } 00:06:45.390 ] 00:06:45.390 } 00:06:45.390 [2024-07-15 22:37:00.901686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.712 [2024-07-15 22:37:01.015789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.712 [2024-07-15 22:37:01.070218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.971  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.971 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:45.971 22:37:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.538 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:46.539 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:46.539 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.539 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.539 [2024-07-15 22:37:02.054527] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:46.539 [2024-07-15 22:37:02.054634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62658 ] 00:06:46.539 { 00:06:46.539 "subsystems": [ 00:06:46.539 { 00:06:46.539 "subsystem": "bdev", 00:06:46.539 "config": [ 00:06:46.539 { 00:06:46.539 "params": { 00:06:46.539 "trtype": "pcie", 00:06:46.539 "traddr": "0000:00:10.0", 00:06:46.539 "name": "Nvme0" 00:06:46.539 }, 00:06:46.539 "method": "bdev_nvme_attach_controller" 00:06:46.539 }, 00:06:46.539 { 00:06:46.539 "method": "bdev_wait_for_examine" 00:06:46.539 } 00:06:46.539 ] 00:06:46.539 } 00:06:46.539 ] 00:06:46.539 } 00:06:46.798 [2024-07-15 22:37:02.190664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.798 [2024-07-15 22:37:02.300910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.798 [2024-07-15 22:37:02.356887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.315  Copying: 56/56 [kB] (average 54 MBps) 00:06:47.315 00:06:47.315 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:47.315 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:47.315 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.315 22:37:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.315 { 00:06:47.315 "subsystems": [ 00:06:47.315 { 00:06:47.315 "subsystem": "bdev", 00:06:47.315 "config": [ 00:06:47.315 { 00:06:47.315 "params": { 00:06:47.315 "trtype": "pcie", 00:06:47.315 "traddr": "0000:00:10.0", 00:06:47.315 "name": "Nvme0" 00:06:47.315 }, 00:06:47.315 "method": "bdev_nvme_attach_controller" 00:06:47.315 }, 00:06:47.315 { 00:06:47.315 "method": "bdev_wait_for_examine" 00:06:47.315 } 00:06:47.315 ] 00:06:47.315 } 00:06:47.315 ] 00:06:47.315 } 00:06:47.315 [2024-07-15 22:37:02.741978] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:47.315 [2024-07-15 22:37:02.742078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62677 ] 00:06:47.315 [2024-07-15 22:37:02.881960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.573 [2024-07-15 22:37:02.994647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.573 [2024-07-15 22:37:03.048080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.832  Copying: 56/56 [kB] (average 27 MBps) 00:06:47.832 00:06:47.832 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.090 22:37:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.090 { 00:06:48.090 "subsystems": [ 00:06:48.090 { 00:06:48.090 "subsystem": "bdev", 00:06:48.090 "config": [ 00:06:48.090 { 00:06:48.090 "params": { 00:06:48.090 "trtype": "pcie", 00:06:48.090 "traddr": "0000:00:10.0", 00:06:48.090 "name": "Nvme0" 00:06:48.090 }, 00:06:48.090 "method": "bdev_nvme_attach_controller" 00:06:48.090 }, 00:06:48.090 { 00:06:48.090 "method": "bdev_wait_for_examine" 00:06:48.090 } 00:06:48.090 ] 00:06:48.090 } 00:06:48.090 ] 00:06:48.090 } 00:06:48.090 [2024-07-15 22:37:03.454213] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:48.090 [2024-07-15 22:37:03.454546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:06:48.090 [2024-07-15 22:37:03.596545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.348 [2024-07-15 22:37:03.714867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.348 [2024-07-15 22:37:03.770979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.606  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:48.606 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:48.606 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.174 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:49.174 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.174 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.174 22:37:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.451 [2024-07-15 22:37:04.743959] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
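Each pass of this loop is the same round trip the xtrace shows: fill dd.dump0 with generated bytes, write it to the Nvme0n1 bdev at the chosen block size and queue depth, read the region back into dd.dump1, and compare. A condensed sketch of one pass, reusing SPDK_DIR and CONF from the sketch above; DD, DUMP0 and DUMP1 are shorthand introduced here, and the tr pipeline only stands in for the suite's gen_bytes helper:

DD=$SPDK_DIR/build/bin/spdk_dd
DUMP0=$SPDK_DIR/test/dd/dd.dump0
DUMP1=$SPDK_DIR/test/dd/dd.dump1

tr -dc 'a-z0-9' < /dev/urandom | head -c 57344 > "$DUMP0"                    # stand-in for gen_bytes 57344
"$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=8192 --qd=64 --json "$CONF"            # write 7 x 8 KiB to the bdev
"$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=8192 --qd=64 --count=7 --json "$CONF"  # read the same 7 blocks back
diff -q "$DUMP0" "$DUMP1"                                                    # any difference fails the iteration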
00:06:49.451 [2024-07-15 22:37:04.744058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62712 ] 00:06:49.451 { 00:06:49.451 "subsystems": [ 00:06:49.451 { 00:06:49.451 "subsystem": "bdev", 00:06:49.451 "config": [ 00:06:49.451 { 00:06:49.451 "params": { 00:06:49.451 "trtype": "pcie", 00:06:49.451 "traddr": "0000:00:10.0", 00:06:49.451 "name": "Nvme0" 00:06:49.451 }, 00:06:49.451 "method": "bdev_nvme_attach_controller" 00:06:49.451 }, 00:06:49.451 { 00:06:49.451 "method": "bdev_wait_for_examine" 00:06:49.451 } 00:06:49.451 ] 00:06:49.451 } 00:06:49.451 ] 00:06:49.451 } 00:06:49.451 [2024-07-15 22:37:04.884643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.451 [2024-07-15 22:37:05.000401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.728 [2024-07-15 22:37:05.058619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.986  Copying: 56/56 [kB] (average 54 MBps) 00:06:49.986 00:06:49.986 22:37:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:49.986 22:37:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:49.986 22:37:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.986 22:37:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.986 [2024-07-15 22:37:05.455602] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:49.986 [2024-07-15 22:37:05.455948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:06:49.986 { 00:06:49.986 "subsystems": [ 00:06:49.986 { 00:06:49.986 "subsystem": "bdev", 00:06:49.986 "config": [ 00:06:49.986 { 00:06:49.986 "params": { 00:06:49.986 "trtype": "pcie", 00:06:49.986 "traddr": "0000:00:10.0", 00:06:49.986 "name": "Nvme0" 00:06:49.986 }, 00:06:49.986 "method": "bdev_nvme_attach_controller" 00:06:49.986 }, 00:06:49.986 { 00:06:49.986 "method": "bdev_wait_for_examine" 00:06:49.986 } 00:06:49.986 ] 00:06:49.986 } 00:06:49.986 ] 00:06:49.986 } 00:06:50.244 [2024-07-15 22:37:05.594794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.244 [2024-07-15 22:37:05.704098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.244 [2024-07-15 22:37:05.759196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.762  Copying: 56/56 [kB] (average 54 MBps) 00:06:50.762 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.762 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.762 [2024-07-15 22:37:06.151434] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:50.762 [2024-07-15 22:37:06.151546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62746 ] 00:06:50.762 { 00:06:50.762 "subsystems": [ 00:06:50.762 { 00:06:50.762 "subsystem": "bdev", 00:06:50.762 "config": [ 00:06:50.762 { 00:06:50.762 "params": { 00:06:50.762 "trtype": "pcie", 00:06:50.762 "traddr": "0000:00:10.0", 00:06:50.762 "name": "Nvme0" 00:06:50.762 }, 00:06:50.762 "method": "bdev_nvme_attach_controller" 00:06:50.762 }, 00:06:50.762 { 00:06:50.762 "method": "bdev_wait_for_examine" 00:06:50.762 } 00:06:50.762 ] 00:06:50.762 } 00:06:50.762 ] 00:06:50.762 } 00:06:50.762 [2024-07-15 22:37:06.288883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.020 [2024-07-15 22:37:06.390036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.020 [2024-07-15 22:37:06.445557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.278  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:51.278 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:51.278 22:37:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:51.844 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:51.844 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.844 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 [2024-07-15 22:37:07.344780] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:51.844 [2024-07-15 22:37:07.345036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62765 ] 00:06:51.844 { 00:06:51.844 "subsystems": [ 00:06:51.844 { 00:06:51.844 "subsystem": "bdev", 00:06:51.844 "config": [ 00:06:51.844 { 00:06:51.844 "params": { 00:06:51.844 "trtype": "pcie", 00:06:51.844 "traddr": "0000:00:10.0", 00:06:51.844 "name": "Nvme0" 00:06:51.844 }, 00:06:51.844 "method": "bdev_nvme_attach_controller" 00:06:51.844 }, 00:06:51.844 { 00:06:51.844 "method": "bdev_wait_for_examine" 00:06:51.844 } 00:06:51.844 ] 00:06:51.844 } 00:06:51.844 ] 00:06:51.844 } 00:06:52.102 [2024-07-15 22:37:07.482378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.102 [2024-07-15 22:37:07.585452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.102 [2024-07-15 22:37:07.639753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.620  Copying: 48/48 [kB] (average 46 MBps) 00:06:52.620 00:06:52.620 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:52.620 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:52.620 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.620 22:37:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.620 [2024-07-15 22:37:08.003781] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:52.620 [2024-07-15 22:37:08.003887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62784 ] 00:06:52.620 { 00:06:52.620 "subsystems": [ 00:06:52.620 { 00:06:52.620 "subsystem": "bdev", 00:06:52.620 "config": [ 00:06:52.620 { 00:06:52.620 "params": { 00:06:52.620 "trtype": "pcie", 00:06:52.620 "traddr": "0000:00:10.0", 00:06:52.620 "name": "Nvme0" 00:06:52.620 }, 00:06:52.620 "method": "bdev_nvme_attach_controller" 00:06:52.620 }, 00:06:52.620 { 00:06:52.620 "method": "bdev_wait_for_examine" 00:06:52.620 } 00:06:52.620 ] 00:06:52.620 } 00:06:52.620 ] 00:06:52.620 } 00:06:52.620 [2024-07-15 22:37:08.141765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.882 [2024-07-15 22:37:08.243402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.882 [2024-07-15 22:37:08.296367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.140  Copying: 48/48 [kB] (average 46 MBps) 00:06:53.140 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.140 22:37:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.140 [2024-07-15 22:37:08.685407] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
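Between transfers the suite calls clear_nvme, whose xtrace is visible above: it overwrites the start of the bdev with zeroes so data left by the previous pass cannot satisfy the next diff. Condensed into a sketch from that trace (the real helper in dd/common.sh takes extra arguments and bookkeeping that are elided here):

clear_nvme() {
    local bdev=$1                 # e.g. Nvme0n1
    local bs=1048576 count=1
    # Zero the first 1 MiB of the bdev, exactly as the traced command does.
    "$DD" --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json "$CONF"
}
clear_nvme Nvme0n1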
00:06:53.140 [2024-07-15 22:37:08.685526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62794 ] 00:06:53.140 { 00:06:53.140 "subsystems": [ 00:06:53.140 { 00:06:53.140 "subsystem": "bdev", 00:06:53.140 "config": [ 00:06:53.140 { 00:06:53.140 "params": { 00:06:53.140 "trtype": "pcie", 00:06:53.141 "traddr": "0000:00:10.0", 00:06:53.141 "name": "Nvme0" 00:06:53.141 }, 00:06:53.141 "method": "bdev_nvme_attach_controller" 00:06:53.141 }, 00:06:53.141 { 00:06:53.141 "method": "bdev_wait_for_examine" 00:06:53.141 } 00:06:53.141 ] 00:06:53.141 } 00:06:53.141 ] 00:06:53.141 } 00:06:53.399 [2024-07-15 22:37:08.825273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.399 [2024-07-15 22:37:08.935393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.659 [2024-07-15 22:37:08.988216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.918  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:53.918 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:53.918 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.486 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:54.486 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:54.486 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.486 22:37:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.486 [2024-07-15 22:37:09.875143] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:54.486 [2024-07-15 22:37:09.875947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62819 ] 00:06:54.486 { 00:06:54.486 "subsystems": [ 00:06:54.486 { 00:06:54.486 "subsystem": "bdev", 00:06:54.486 "config": [ 00:06:54.486 { 00:06:54.486 "params": { 00:06:54.486 "trtype": "pcie", 00:06:54.486 "traddr": "0000:00:10.0", 00:06:54.486 "name": "Nvme0" 00:06:54.486 }, 00:06:54.486 "method": "bdev_nvme_attach_controller" 00:06:54.486 }, 00:06:54.486 { 00:06:54.486 "method": "bdev_wait_for_examine" 00:06:54.486 } 00:06:54.486 ] 00:06:54.486 } 00:06:54.486 ] 00:06:54.486 } 00:06:54.486 [2024-07-15 22:37:10.016002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.744 [2024-07-15 22:37:10.137634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.744 [2024-07-15 22:37:10.193598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.003  Copying: 48/48 [kB] (average 46 MBps) 00:06:55.003 00:06:55.003 22:37:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:55.003 22:37:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:55.003 22:37:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.003 22:37:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.262 [2024-07-15 22:37:10.579129] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:55.262 [2024-07-15 22:37:10.579260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62832 ] 00:06:55.262 { 00:06:55.262 "subsystems": [ 00:06:55.262 { 00:06:55.262 "subsystem": "bdev", 00:06:55.262 "config": [ 00:06:55.262 { 00:06:55.262 "params": { 00:06:55.262 "trtype": "pcie", 00:06:55.262 "traddr": "0000:00:10.0", 00:06:55.262 "name": "Nvme0" 00:06:55.262 }, 00:06:55.262 "method": "bdev_nvme_attach_controller" 00:06:55.262 }, 00:06:55.262 { 00:06:55.262 "method": "bdev_wait_for_examine" 00:06:55.262 } 00:06:55.262 ] 00:06:55.262 } 00:06:55.262 ] 00:06:55.262 } 00:06:55.262 [2024-07-15 22:37:10.717014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.521 [2024-07-15 22:37:10.838074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.521 [2024-07-15 22:37:10.893520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.780  Copying: 48/48 [kB] (average 46 MBps) 00:06:55.780 00:06:55.780 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.780 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:55.780 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:55.780 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:55.780 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:55.781 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:55.781 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:55.781 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:55.781 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:55.781 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:55.781 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:55.781 [2024-07-15 22:37:11.301784] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:06:55.781 [2024-07-15 22:37:11.302214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62853 ] 00:06:55.781 { 00:06:55.781 "subsystems": [ 00:06:55.781 { 00:06:55.781 "subsystem": "bdev", 00:06:55.781 "config": [ 00:06:55.781 { 00:06:55.781 "params": { 00:06:55.781 "trtype": "pcie", 00:06:55.781 "traddr": "0000:00:10.0", 00:06:55.781 "name": "Nvme0" 00:06:55.781 }, 00:06:55.781 "method": "bdev_nvme_attach_controller" 00:06:55.781 }, 00:06:55.781 { 00:06:55.781 "method": "bdev_wait_for_examine" 00:06:55.781 } 00:06:55.781 ] 00:06:55.781 } 00:06:55.781 ] 00:06:55.781 } 00:06:56.040 [2024-07-15 22:37:11.441744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.040 [2024-07-15 22:37:11.563511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.299 [2024-07-15 22:37:11.619152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.558  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:56.558 00:06:56.558 00:06:56.558 real 0m16.264s 00:06:56.558 user 0m12.171s 00:06:56.558 sys 0m5.605s 00:06:56.558 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.558 ************************************ 00:06:56.558 END TEST dd_rw 00:06:56.558 ************************************ 00:06:56.558 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.558 22:37:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:56.558 22:37:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:56.558 22:37:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.558 22:37:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.559 22:37:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.559 ************************************ 00:06:56.559 START TEST dd_rw_offset 00:06:56.559 ************************************ 00:06:56.559 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:56.559 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:56.559 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:56.559 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:56.559 22:37:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:56.559 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:56.559 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=v30rles79hc9uju7lvg66149j8mhng6djhppx301tnld7oxfgsejwtm4l18t0qxfbgov8k6cfw4cjq6dlfmydythlq5gwhm20r7m15zkxu6n3vmk7k0lez0d4mfio2b3oo1kiwii5xvlk9kqpx872mgwga9bl2kz6amkitk99d64xal86zm0lfqf6o9y4gc7wi3cj96u6018tmfmck8xmip8cxzpie56f2flq8g0jmdunvpkfkp9vy3biesumm5iocadsiaarr8cw0rvjuvk6h3t1wt1lgfrcqxzescwphn00t92oce2v2cuquous432399q5fx5g473c15h09zzprkmy1odtytu50zb62rl85jjug7s79fjs3b4016dtxp01zqypdd250qbzve2eoxa2beby64jsxsurlw39h4nvgoc9bd478rq906dbbno5ega4vjrjzx7qjp1ofz1yubs1iz6xou7cwzk8umya6e8ko3bjh7s36n297jt3jt6c73tse96m0hjtdl5mpw407hdyiubzpnlnc0du8p48ufmmflwj7uiptaxe0999d48zlkiju3z3fackqqa8668w3ze7eda1lw1cqcmcfdnzcjokqb2c9tkmmw4wo1jsntwsigixdcutshmms9txnf0ude80hcnlhl184ccw04f5gmzz26xqu8eiknj6wc75wn78prytygvizv4ubz1sur3imznjr4egh27ceakz2g4sdrq6i43zyocmh7w8uad6gv250jl3eil9yea58jrcivy4sinwfsj1vv5mp4jq2h07bblz3wni3q6ex90xo0h0q7plvs2sm12y8pv4pnuvha2b6j65xuc9g18seax6a9nfjlpv4g8apqrl8as6u98c2c000udwt0oaj4sx23quail1ftkdd7kekga1d6170h7n41wg88wkoobzg9okyd8fqsugr2b8xtkz8cj29lvnu38y5fkolk8ijktz63xob2d9p5k2vk9ysbydxyxts7bxrc1l8vll3iqscn2ry31ogkvec3f3vrz0m2puiws2nn5z3w86uuy4mimvzefr50d11ib828rs78cji4h3ct26o6h3y7nwdt5h0ummy8fpzainlxbrgj380phtp5zb1y6c1q05shkjul4o2ldhgvpmjp731aj0cp29195ilvlgtvzcuq90v1h6ke695rt62bgv0bgqbhmlqq08srloy79idakcl5263ab6iv9tevrysx6xrc3ccy0bs5lvdtjqoqbxrjxadq1j6s1ldnllwdz58yxzl4jrsgqto7kei99dz55qwvh9yyudq9jw4tz8xcwk5erpmpl2hq1athioz9tb8ihutj3bfzc7tamrfzvb3pl25lnwmrfb0z2ezj64dadrmvmr5lt1biz9jsjqh756o59s55zq0opukzgz04nfiyw04hutos5iiao6vp8hswa405bhy2qiy9xct6vmptuekl2pyykv1iinjjadlpzi86ap87qdtflj2xm2jffma38gcn1nda6ar2qdl734bquop6y5jtm8unpyfpnptlqv3vhaxp9iygrmjt2hluc1nvou9kc86h8chlnr4nuq7gv9msitv10or2an8pbcn480l6umthjued71ryakfs5favdnqjx5i4jkuoqbv5a8psxkub9e8trhxutbtdzbt0p3rxmyo54xqpwpecqpahd0eerj61gu3gmyfrc429udunlbzv031ngl0wuux0c0yoplg5sfbz6385cmduhsoh4s2z7zqinxrkbovrdkmpqf0o6bx7dk0o7vt7ns479sy2r2wz3zrvw0kxqb9j8611ggm3797dnlfkimwyrv5lodexb6yzuttufxr1y6frz419dg6kh5gt2dn1z2lssn4wimxgzukowbrx13u6zdvh6g430ktxu14fxx1j0j6pgy2i6es6478v0zdfnqipg7wijneybmbr1im85da5y4gmzxxs3ycn11eygz6h7z11c7dp6jva8cbd8bf08heiuoycvlbf4tb4vrkek9nqszsw0341d62ft3ryyorjb7ia6h43pydq7tek0ynv2w3oiphdt1ysdunqvwd9z6wpy3t50n2gbciaveuwd8djxhrt1l9gh8hwoi6b8hw9tzuuo3zih5hb3wwji3b5q9pg8s4nqi015whfy950jm5on1jnb44rs6d3vy9e5ey146i464lv3apzr2wdyxgac9u3t5zs4nfdv4thzhncwz76c36a1gmh0qhh7m6upkpcuce5roc8w8jpvayr60mta08ajgwb4ehk7vbqwlpz4q3gw9ncuew2myvuvv0c4bpobvpwnyrrj6p19a064dzmhjdtiqc208adsc8dzti0szquxrtgvvofhnitnvsh97lgxcoya8z9da00j0p8op8yedta071fwt4r8ph7nb8d3ygl2oolvfrfs6lxzdhvmhci0h70bu9bz072bk6mwra1y03lhkh1a5r27w38xt1fipnb3jaz36dczp8fkvdn3knui7fc5ghne2nounwgze61apz6mi3th68eg1ucl5zmas0yvtkvd6qd5yupk1fagboyw03jfbj084teo3lav5e5f9odmlreia1qcpln5pqpt8j8sdfkiks6ifzgc4jz2wcik6jxi9q5jjc1ea9n07eh5rig00n2sxr2py1fjue5ol3kjhqwciqbpwvl8a2utaq87t861hv4l7ufvtrhui8f9p3wig93ynueblgyg4t53mmmfcto1eczj6ulnlsxqlssas4u9rx7xyztjh3gh1ambcl66bcvi7b0lzkbnbxstwi1yxg31r2dhoh5kibejjem8zv8kvl2j9k5wn9ndwdyk5d6223hhh7qrpjyw9qyhamvsfnujljmi4qr3yijfzwvedkoj36a5jvetlg1kvu5ljthj04p2mq9epy8rh2c04t4tm503d18yyfjzuvhj7wkx2pk994tb7n2a6euosbdkz5s0hkivfvlnmm7qdz9tr6c9yk3ks78plnrvu1a27hkvd92zudwiyavdjqx2gix68b7rij8n6jxmg04ndgddm2e5ous0aza9nz8ixzabyk0b8mxqsj5z8fzenxn3ol4y2d8f85tdl3ar6xkod0srne5vbqz8drubrev6euabt3revptoyp86n84qphvndtjd5qaw7id2cmdm5idwapelaiq7rnvxggrrpyj25jwrdmgysvzzoxtcksqy7agmgm7r0ttn4cpiww8wgiqsmrsaf9unbrqk8qi8ac1p0017slbttwnomxn6d470b8o36jfdx79zejppskufa55msuhr028z06jf07wvg2k8h2tb9cr8w5fhimzsy1fprl15ia95rku1l100s6kor36u45qko75pfjgvck1q7t0oivzvpdok8ack0zt1bw60ry099mezyxfbn05n478n1km39wro9knqlwlvgh13e9u4r0wd1uvzp1kj4kc0mka7mqowidvdv9cwaxacw5lpzy351vc8xkr4v52464
d38q3hjzve02id68mo6zawde1ddcgkru3n7d5tx1qhox81ebr3rq43h23rktia89rbesetyb7008x573q531huaej4x0ss0ggf1791dc2vpucjd7dr4or3wghscnd0qyhbbgi1aa50vf8zl1qrlnc757nzc99qax1h72n8iqshxlu8ph0kap469c33hupc6m5ep386qp0ejyy028ekqh1wezcwpoahei71ff3v2iwcsckmx5rnte6o1hoaupc51z1l9fln0e9o80c782lke2wf1wiozifckycbbb973ytvogvmhk4jfadoqoaq4yec1vzp3q8hm8g2uqnvo4c5ng9vznu5kl6iq3hvdy3wy6w5vqlo6lsin1mv0m81md8jl7p1hk2yffiz0tcm7huy2s908s8acfl7dgmlz7d2uduibfua2v4h0xd8nxap9pbpbuudlhk2i5ay9lfqi84x60eswvt4sc2wgdzttkiav7j40oqhcdm82m7z8asokod0cvpl8hhrloxol3cg53am0hf57c09md65klrp 00:06:56.559 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:56.559 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:56.559 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:56.559 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:56.559 [2024-07-15 22:37:12.091157] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:56.559 [2024-07-15 22:37:12.091261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62889 ] 00:06:56.559 { 00:06:56.559 "subsystems": [ 00:06:56.559 { 00:06:56.559 "subsystem": "bdev", 00:06:56.559 "config": [ 00:06:56.559 { 00:06:56.559 "params": { 00:06:56.559 "trtype": "pcie", 00:06:56.559 "traddr": "0000:00:10.0", 00:06:56.559 "name": "Nvme0" 00:06:56.559 }, 00:06:56.559 "method": "bdev_nvme_attach_controller" 00:06:56.559 }, 00:06:56.559 { 00:06:56.559 "method": "bdev_wait_for_examine" 00:06:56.559 } 00:06:56.559 ] 00:06:56.559 } 00:06:56.559 ] 00:06:56.559 } 00:06:56.820 [2024-07-15 22:37:12.230484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.820 [2024-07-15 22:37:12.324524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.820 [2024-07-15 22:37:12.379965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.338  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:57.338 00:06:57.338 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:57.338 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:57.338 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:57.338 22:37:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:57.338 [2024-07-15 22:37:12.737364] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
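The long lowercase string above is the 4 KiB of generated data that dd_rw_offset moves through the bdev at an offset: it is written with --seek=1 (skip one output block before writing) and read back with --skip=1 --count=1 (skip one input block, copy a single block). A sketch of that pair of transfers, reusing the variables from the earlier sketches and a stand-in for gen_bytes 4096:

tr -dc 'a-z0-9' < /dev/urandom | head -c 4096 > "$DUMP0"            # stand-in for gen_bytes 4096
"$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"            # write the block at offset 1
"$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"  # read that single block back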
00:06:57.338 [2024-07-15 22:37:12.737462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62897 ] 00:06:57.338 { 00:06:57.338 "subsystems": [ 00:06:57.338 { 00:06:57.338 "subsystem": "bdev", 00:06:57.338 "config": [ 00:06:57.338 { 00:06:57.338 "params": { 00:06:57.338 "trtype": "pcie", 00:06:57.338 "traddr": "0000:00:10.0", 00:06:57.338 "name": "Nvme0" 00:06:57.338 }, 00:06:57.338 "method": "bdev_nvme_attach_controller" 00:06:57.338 }, 00:06:57.338 { 00:06:57.338 "method": "bdev_wait_for_examine" 00:06:57.338 } 00:06:57.338 ] 00:06:57.338 } 00:06:57.338 ] 00:06:57.338 } 00:06:57.338 [2024-07-15 22:37:12.872768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.616 [2024-07-15 22:37:12.980099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.616 [2024-07-15 22:37:13.033222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.876  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:57.876 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ v30rles79hc9uju7lvg66149j8mhng6djhppx301tnld7oxfgsejwtm4l18t0qxfbgov8k6cfw4cjq6dlfmydythlq5gwhm20r7m15zkxu6n3vmk7k0lez0d4mfio2b3oo1kiwii5xvlk9kqpx872mgwga9bl2kz6amkitk99d64xal86zm0lfqf6o9y4gc7wi3cj96u6018tmfmck8xmip8cxzpie56f2flq8g0jmdunvpkfkp9vy3biesumm5iocadsiaarr8cw0rvjuvk6h3t1wt1lgfrcqxzescwphn00t92oce2v2cuquous432399q5fx5g473c15h09zzprkmy1odtytu50zb62rl85jjug7s79fjs3b4016dtxp01zqypdd250qbzve2eoxa2beby64jsxsurlw39h4nvgoc9bd478rq906dbbno5ega4vjrjzx7qjp1ofz1yubs1iz6xou7cwzk8umya6e8ko3bjh7s36n297jt3jt6c73tse96m0hjtdl5mpw407hdyiubzpnlnc0du8p48ufmmflwj7uiptaxe0999d48zlkiju3z3fackqqa8668w3ze7eda1lw1cqcmcfdnzcjokqb2c9tkmmw4wo1jsntwsigixdcutshmms9txnf0ude80hcnlhl184ccw04f5gmzz26xqu8eiknj6wc75wn78prytygvizv4ubz1sur3imznjr4egh27ceakz2g4sdrq6i43zyocmh7w8uad6gv250jl3eil9yea58jrcivy4sinwfsj1vv5mp4jq2h07bblz3wni3q6ex90xo0h0q7plvs2sm12y8pv4pnuvha2b6j65xuc9g18seax6a9nfjlpv4g8apqrl8as6u98c2c000udwt0oaj4sx23quail1ftkdd7kekga1d6170h7n41wg88wkoobzg9okyd8fqsugr2b8xtkz8cj29lvnu38y5fkolk8ijktz63xob2d9p5k2vk9ysbydxyxts7bxrc1l8vll3iqscn2ry31ogkvec3f3vrz0m2puiws2nn5z3w86uuy4mimvzefr50d11ib828rs78cji4h3ct26o6h3y7nwdt5h0ummy8fpzainlxbrgj380phtp5zb1y6c1q05shkjul4o2ldhgvpmjp731aj0cp29195ilvlgtvzcuq90v1h6ke695rt62bgv0bgqbhmlqq08srloy79idakcl5263ab6iv9tevrysx6xrc3ccy0bs5lvdtjqoqbxrjxadq1j6s1ldnllwdz58yxzl4jrsgqto7kei99dz55qwvh9yyudq9jw4tz8xcwk5erpmpl2hq1athioz9tb8ihutj3bfzc7tamrfzvb3pl25lnwmrfb0z2ezj64dadrmvmr5lt1biz9jsjqh756o59s55zq0opukzgz04nfiyw04hutos5iiao6vp8hswa405bhy2qiy9xct6vmptuekl2pyykv1iinjjadlpzi86ap87qdtflj2xm2jffma38gcn1nda6ar2qdl734bquop6y5jtm8unpyfpnptlqv3vhaxp9iygrmjt2hluc1nvou9kc86h8chlnr4nuq7gv9msitv10or2an8pbcn480l6umthjued71ryakfs5favdnqjx5i4jkuoqbv5a8psxkub9e8trhxutbtdzbt0p3rxmyo54xqpwpecqpahd0eerj61gu3gmyfrc429udunlbzv031ngl0wuux0c0yoplg5sfbz6385cmduhsoh4s2z7zqinxrkbovrdkmpqf0o6bx7dk0o7vt7ns479sy2r2wz3zrvw0kxqb9j8611ggm3797dnlfkimwyrv5lodexb6yzuttufxr1y6frz419dg6kh5gt2dn1z2lssn4wimxgzukowbrx13u6zdvh6g430ktxu14fxx1j0j6pgy2i6es6478v0zdfnqipg7wijneybmbr1im85da5y4gmzxxs3ycn11eygz6h7z11c7dp6jva8cbd8bf08heiuoycvlbf4tb4vrkek9nqszsw0341d62ft3ryyorjb7ia6h43pydq7tek0ynv2w3oiphdt1ysdunqvwd9z6wpy3t50n2gbciaveuwd8djxhrt1l9gh8hwoi6b8hw9tzuuo3zih5hb3wwji3b5q9pg8s
4nqi015whfy950jm5on1jnb44rs6d3vy9e5ey146i464lv3apzr2wdyxgac9u3t5zs4nfdv4thzhncwz76c36a1gmh0qhh7m6upkpcuce5roc8w8jpvayr60mta08ajgwb4ehk7vbqwlpz4q3gw9ncuew2myvuvv0c4bpobvpwnyrrj6p19a064dzmhjdtiqc208adsc8dzti0szquxrtgvvofhnitnvsh97lgxcoya8z9da00j0p8op8yedta071fwt4r8ph7nb8d3ygl2oolvfrfs6lxzdhvmhci0h70bu9bz072bk6mwra1y03lhkh1a5r27w38xt1fipnb3jaz36dczp8fkvdn3knui7fc5ghne2nounwgze61apz6mi3th68eg1ucl5zmas0yvtkvd6qd5yupk1fagboyw03jfbj084teo3lav5e5f9odmlreia1qcpln5pqpt8j8sdfkiks6ifzgc4jz2wcik6jxi9q5jjc1ea9n07eh5rig00n2sxr2py1fjue5ol3kjhqwciqbpwvl8a2utaq87t861hv4l7ufvtrhui8f9p3wig93ynueblgyg4t53mmmfcto1eczj6ulnlsxqlssas4u9rx7xyztjh3gh1ambcl66bcvi7b0lzkbnbxstwi1yxg31r2dhoh5kibejjem8zv8kvl2j9k5wn9ndwdyk5d6223hhh7qrpjyw9qyhamvsfnujljmi4qr3yijfzwvedkoj36a5jvetlg1kvu5ljthj04p2mq9epy8rh2c04t4tm503d18yyfjzuvhj7wkx2pk994tb7n2a6euosbdkz5s0hkivfvlnmm7qdz9tr6c9yk3ks78plnrvu1a27hkvd92zudwiyavdjqx2gix68b7rij8n6jxmg04ndgddm2e5ous0aza9nz8ixzabyk0b8mxqsj5z8fzenxn3ol4y2d8f85tdl3ar6xkod0srne5vbqz8drubrev6euabt3revptoyp86n84qphvndtjd5qaw7id2cmdm5idwapelaiq7rnvxggrrpyj25jwrdmgysvzzoxtcksqy7agmgm7r0ttn4cpiww8wgiqsmrsaf9unbrqk8qi8ac1p0017slbttwnomxn6d470b8o36jfdx79zejppskufa55msuhr028z06jf07wvg2k8h2tb9cr8w5fhimzsy1fprl15ia95rku1l100s6kor36u45qko75pfjgvck1q7t0oivzvpdok8ack0zt1bw60ry099mezyxfbn05n478n1km39wro9knqlwlvgh13e9u4r0wd1uvzp1kj4kc0mka7mqowidvdv9cwaxacw5lpzy351vc8xkr4v52464d38q3hjzve02id68mo6zawde1ddcgkru3n7d5tx1qhox81ebr3rq43h23rktia89rbesetyb7008x573q531huaej4x0ss0ggf1791dc2vpucjd7dr4or3wghscnd0qyhbbgi1aa50vf8zl1qrlnc757nzc99qax1h72n8iqshxlu8ph0kap469c33hupc6m5ep386qp0ejyy028ekqh1wezcwpoahei71ff3v2iwcsckmx5rnte6o1hoaupc51z1l9fln0e9o80c782lke2wf1wiozifckycbbb973ytvogvmhk4jfadoqoaq4yec1vzp3q8hm8g2uqnvo4c5ng9vznu5kl6iq3hvdy3wy6w5vqlo6lsin1mv0m81md8jl7p1hk2yffiz0tcm7huy2s908s8acfl7dgmlz7d2uduibfua2v4h0xd8nxap9pbpbuudlhk2i5ay9lfqi84x60eswvt4sc2wgdzttkiav7j40oqhcdm82m7z8asokod0cvpl8hhrloxol3cg53am0hf57c09md65klrp == 
\v\3\0\r\l\e\s\7\9\h\c\9\u\j\u\7\l\v\g\6\6\1\4\9\j\8\m\h\n\g\6\d\j\h\p\p\x\3\0\1\t\n\l\d\7\o\x\f\g\s\e\j\w\t\m\4\l\1\8\t\0\q\x\f\b\g\o\v\8\k\6\c\f\w\4\c\j\q\6\d\l\f\m\y\d\y\t\h\l\q\5\g\w\h\m\2\0\r\7\m\1\5\z\k\x\u\6\n\3\v\m\k\7\k\0\l\e\z\0\d\4\m\f\i\o\2\b\3\o\o\1\k\i\w\i\i\5\x\v\l\k\9\k\q\p\x\8\7\2\m\g\w\g\a\9\b\l\2\k\z\6\a\m\k\i\t\k\9\9\d\6\4\x\a\l\8\6\z\m\0\l\f\q\f\6\o\9\y\4\g\c\7\w\i\3\c\j\9\6\u\6\0\1\8\t\m\f\m\c\k\8\x\m\i\p\8\c\x\z\p\i\e\5\6\f\2\f\l\q\8\g\0\j\m\d\u\n\v\p\k\f\k\p\9\v\y\3\b\i\e\s\u\m\m\5\i\o\c\a\d\s\i\a\a\r\r\8\c\w\0\r\v\j\u\v\k\6\h\3\t\1\w\t\1\l\g\f\r\c\q\x\z\e\s\c\w\p\h\n\0\0\t\9\2\o\c\e\2\v\2\c\u\q\u\o\u\s\4\3\2\3\9\9\q\5\f\x\5\g\4\7\3\c\1\5\h\0\9\z\z\p\r\k\m\y\1\o\d\t\y\t\u\5\0\z\b\6\2\r\l\8\5\j\j\u\g\7\s\7\9\f\j\s\3\b\4\0\1\6\d\t\x\p\0\1\z\q\y\p\d\d\2\5\0\q\b\z\v\e\2\e\o\x\a\2\b\e\b\y\6\4\j\s\x\s\u\r\l\w\3\9\h\4\n\v\g\o\c\9\b\d\4\7\8\r\q\9\0\6\d\b\b\n\o\5\e\g\a\4\v\j\r\j\z\x\7\q\j\p\1\o\f\z\1\y\u\b\s\1\i\z\6\x\o\u\7\c\w\z\k\8\u\m\y\a\6\e\8\k\o\3\b\j\h\7\s\3\6\n\2\9\7\j\t\3\j\t\6\c\7\3\t\s\e\9\6\m\0\h\j\t\d\l\5\m\p\w\4\0\7\h\d\y\i\u\b\z\p\n\l\n\c\0\d\u\8\p\4\8\u\f\m\m\f\l\w\j\7\u\i\p\t\a\x\e\0\9\9\9\d\4\8\z\l\k\i\j\u\3\z\3\f\a\c\k\q\q\a\8\6\6\8\w\3\z\e\7\e\d\a\1\l\w\1\c\q\c\m\c\f\d\n\z\c\j\o\k\q\b\2\c\9\t\k\m\m\w\4\w\o\1\j\s\n\t\w\s\i\g\i\x\d\c\u\t\s\h\m\m\s\9\t\x\n\f\0\u\d\e\8\0\h\c\n\l\h\l\1\8\4\c\c\w\0\4\f\5\g\m\z\z\2\6\x\q\u\8\e\i\k\n\j\6\w\c\7\5\w\n\7\8\p\r\y\t\y\g\v\i\z\v\4\u\b\z\1\s\u\r\3\i\m\z\n\j\r\4\e\g\h\2\7\c\e\a\k\z\2\g\4\s\d\r\q\6\i\4\3\z\y\o\c\m\h\7\w\8\u\a\d\6\g\v\2\5\0\j\l\3\e\i\l\9\y\e\a\5\8\j\r\c\i\v\y\4\s\i\n\w\f\s\j\1\v\v\5\m\p\4\j\q\2\h\0\7\b\b\l\z\3\w\n\i\3\q\6\e\x\9\0\x\o\0\h\0\q\7\p\l\v\s\2\s\m\1\2\y\8\p\v\4\p\n\u\v\h\a\2\b\6\j\6\5\x\u\c\9\g\1\8\s\e\a\x\6\a\9\n\f\j\l\p\v\4\g\8\a\p\q\r\l\8\a\s\6\u\9\8\c\2\c\0\0\0\u\d\w\t\0\o\a\j\4\s\x\2\3\q\u\a\i\l\1\f\t\k\d\d\7\k\e\k\g\a\1\d\6\1\7\0\h\7\n\4\1\w\g\8\8\w\k\o\o\b\z\g\9\o\k\y\d\8\f\q\s\u\g\r\2\b\8\x\t\k\z\8\c\j\2\9\l\v\n\u\3\8\y\5\f\k\o\l\k\8\i\j\k\t\z\6\3\x\o\b\2\d\9\p\5\k\2\v\k\9\y\s\b\y\d\x\y\x\t\s\7\b\x\r\c\1\l\8\v\l\l\3\i\q\s\c\n\2\r\y\3\1\o\g\k\v\e\c\3\f\3\v\r\z\0\m\2\p\u\i\w\s\2\n\n\5\z\3\w\8\6\u\u\y\4\m\i\m\v\z\e\f\r\5\0\d\1\1\i\b\8\2\8\r\s\7\8\c\j\i\4\h\3\c\t\2\6\o\6\h\3\y\7\n\w\d\t\5\h\0\u\m\m\y\8\f\p\z\a\i\n\l\x\b\r\g\j\3\8\0\p\h\t\p\5\z\b\1\y\6\c\1\q\0\5\s\h\k\j\u\l\4\o\2\l\d\h\g\v\p\m\j\p\7\3\1\a\j\0\c\p\2\9\1\9\5\i\l\v\l\g\t\v\z\c\u\q\9\0\v\1\h\6\k\e\6\9\5\r\t\6\2\b\g\v\0\b\g\q\b\h\m\l\q\q\0\8\s\r\l\o\y\7\9\i\d\a\k\c\l\5\2\6\3\a\b\6\i\v\9\t\e\v\r\y\s\x\6\x\r\c\3\c\c\y\0\b\s\5\l\v\d\t\j\q\o\q\b\x\r\j\x\a\d\q\1\j\6\s\1\l\d\n\l\l\w\d\z\5\8\y\x\z\l\4\j\r\s\g\q\t\o\7\k\e\i\9\9\d\z\5\5\q\w\v\h\9\y\y\u\d\q\9\j\w\4\t\z\8\x\c\w\k\5\e\r\p\m\p\l\2\h\q\1\a\t\h\i\o\z\9\t\b\8\i\h\u\t\j\3\b\f\z\c\7\t\a\m\r\f\z\v\b\3\p\l\2\5\l\n\w\m\r\f\b\0\z\2\e\z\j\6\4\d\a\d\r\m\v\m\r\5\l\t\1\b\i\z\9\j\s\j\q\h\7\5\6\o\5\9\s\5\5\z\q\0\o\p\u\k\z\g\z\0\4\n\f\i\y\w\0\4\h\u\t\o\s\5\i\i\a\o\6\v\p\8\h\s\w\a\4\0\5\b\h\y\2\q\i\y\9\x\c\t\6\v\m\p\t\u\e\k\l\2\p\y\y\k\v\1\i\i\n\j\j\a\d\l\p\z\i\8\6\a\p\8\7\q\d\t\f\l\j\2\x\m\2\j\f\f\m\a\3\8\g\c\n\1\n\d\a\6\a\r\2\q\d\l\7\3\4\b\q\u\o\p\6\y\5\j\t\m\8\u\n\p\y\f\p\n\p\t\l\q\v\3\v\h\a\x\p\9\i\y\g\r\m\j\t\2\h\l\u\c\1\n\v\o\u\9\k\c\8\6\h\8\c\h\l\n\r\4\n\u\q\7\g\v\9\m\s\i\t\v\1\0\o\r\2\a\n\8\p\b\c\n\4\8\0\l\6\u\m\t\h\j\u\e\d\7\1\r\y\a\k\f\s\5\f\a\v\d\n\q\j\x\5\i\4\j\k\u\o\q\b\v\5\a\8\p\s\x\k\u\b\9\e\8\t\r\h\x\u\t\b\t\d\z\b\t\0\p\3\r\x\m\y\o\5\4\x\q\p\w\p\e\c\q\p\a\h\d\0\e\e\r\j\6\1\g\u\3\g\m\y\f\r\c\4\2\9\u\d\u\n\l\b\z\v\0\3\1\n\g\l\0\w\u\u\x\0\c\0\y\o\p\l\g\5\s\f\b\z\6\3\8\5\c\m\d\u\h\s\
o\h\4\s\2\z\7\z\q\i\n\x\r\k\b\o\v\r\d\k\m\p\q\f\0\o\6\b\x\7\d\k\0\o\7\v\t\7\n\s\4\7\9\s\y\2\r\2\w\z\3\z\r\v\w\0\k\x\q\b\9\j\8\6\1\1\g\g\m\3\7\9\7\d\n\l\f\k\i\m\w\y\r\v\5\l\o\d\e\x\b\6\y\z\u\t\t\u\f\x\r\1\y\6\f\r\z\4\1\9\d\g\6\k\h\5\g\t\2\d\n\1\z\2\l\s\s\n\4\w\i\m\x\g\z\u\k\o\w\b\r\x\1\3\u\6\z\d\v\h\6\g\4\3\0\k\t\x\u\1\4\f\x\x\1\j\0\j\6\p\g\y\2\i\6\e\s\6\4\7\8\v\0\z\d\f\n\q\i\p\g\7\w\i\j\n\e\y\b\m\b\r\1\i\m\8\5\d\a\5\y\4\g\m\z\x\x\s\3\y\c\n\1\1\e\y\g\z\6\h\7\z\1\1\c\7\d\p\6\j\v\a\8\c\b\d\8\b\f\0\8\h\e\i\u\o\y\c\v\l\b\f\4\t\b\4\v\r\k\e\k\9\n\q\s\z\s\w\0\3\4\1\d\6\2\f\t\3\r\y\y\o\r\j\b\7\i\a\6\h\4\3\p\y\d\q\7\t\e\k\0\y\n\v\2\w\3\o\i\p\h\d\t\1\y\s\d\u\n\q\v\w\d\9\z\6\w\p\y\3\t\5\0\n\2\g\b\c\i\a\v\e\u\w\d\8\d\j\x\h\r\t\1\l\9\g\h\8\h\w\o\i\6\b\8\h\w\9\t\z\u\u\o\3\z\i\h\5\h\b\3\w\w\j\i\3\b\5\q\9\p\g\8\s\4\n\q\i\0\1\5\w\h\f\y\9\5\0\j\m\5\o\n\1\j\n\b\4\4\r\s\6\d\3\v\y\9\e\5\e\y\1\4\6\i\4\6\4\l\v\3\a\p\z\r\2\w\d\y\x\g\a\c\9\u\3\t\5\z\s\4\n\f\d\v\4\t\h\z\h\n\c\w\z\7\6\c\3\6\a\1\g\m\h\0\q\h\h\7\m\6\u\p\k\p\c\u\c\e\5\r\o\c\8\w\8\j\p\v\a\y\r\6\0\m\t\a\0\8\a\j\g\w\b\4\e\h\k\7\v\b\q\w\l\p\z\4\q\3\g\w\9\n\c\u\e\w\2\m\y\v\u\v\v\0\c\4\b\p\o\b\v\p\w\n\y\r\r\j\6\p\1\9\a\0\6\4\d\z\m\h\j\d\t\i\q\c\2\0\8\a\d\s\c\8\d\z\t\i\0\s\z\q\u\x\r\t\g\v\v\o\f\h\n\i\t\n\v\s\h\9\7\l\g\x\c\o\y\a\8\z\9\d\a\0\0\j\0\p\8\o\p\8\y\e\d\t\a\0\7\1\f\w\t\4\r\8\p\h\7\n\b\8\d\3\y\g\l\2\o\o\l\v\f\r\f\s\6\l\x\z\d\h\v\m\h\c\i\0\h\7\0\b\u\9\b\z\0\7\2\b\k\6\m\w\r\a\1\y\0\3\l\h\k\h\1\a\5\r\2\7\w\3\8\x\t\1\f\i\p\n\b\3\j\a\z\3\6\d\c\z\p\8\f\k\v\d\n\3\k\n\u\i\7\f\c\5\g\h\n\e\2\n\o\u\n\w\g\z\e\6\1\a\p\z\6\m\i\3\t\h\6\8\e\g\1\u\c\l\5\z\m\a\s\0\y\v\t\k\v\d\6\q\d\5\y\u\p\k\1\f\a\g\b\o\y\w\0\3\j\f\b\j\0\8\4\t\e\o\3\l\a\v\5\e\5\f\9\o\d\m\l\r\e\i\a\1\q\c\p\l\n\5\p\q\p\t\8\j\8\s\d\f\k\i\k\s\6\i\f\z\g\c\4\j\z\2\w\c\i\k\6\j\x\i\9\q\5\j\j\c\1\e\a\9\n\0\7\e\h\5\r\i\g\0\0\n\2\s\x\r\2\p\y\1\f\j\u\e\5\o\l\3\k\j\h\q\w\c\i\q\b\p\w\v\l\8\a\2\u\t\a\q\8\7\t\8\6\1\h\v\4\l\7\u\f\v\t\r\h\u\i\8\f\9\p\3\w\i\g\9\3\y\n\u\e\b\l\g\y\g\4\t\5\3\m\m\m\f\c\t\o\1\e\c\z\j\6\u\l\n\l\s\x\q\l\s\s\a\s\4\u\9\r\x\7\x\y\z\t\j\h\3\g\h\1\a\m\b\c\l\6\6\b\c\v\i\7\b\0\l\z\k\b\n\b\x\s\t\w\i\1\y\x\g\3\1\r\2\d\h\o\h\5\k\i\b\e\j\j\e\m\8\z\v\8\k\v\l\2\j\9\k\5\w\n\9\n\d\w\d\y\k\5\d\6\2\2\3\h\h\h\7\q\r\p\j\y\w\9\q\y\h\a\m\v\s\f\n\u\j\l\j\m\i\4\q\r\3\y\i\j\f\z\w\v\e\d\k\o\j\3\6\a\5\j\v\e\t\l\g\1\k\v\u\5\l\j\t\h\j\0\4\p\2\m\q\9\e\p\y\8\r\h\2\c\0\4\t\4\t\m\5\0\3\d\1\8\y\y\f\j\z\u\v\h\j\7\w\k\x\2\p\k\9\9\4\t\b\7\n\2\a\6\e\u\o\s\b\d\k\z\5\s\0\h\k\i\v\f\v\l\n\m\m\7\q\d\z\9\t\r\6\c\9\y\k\3\k\s\7\8\p\l\n\r\v\u\1\a\2\7\h\k\v\d\9\2\z\u\d\w\i\y\a\v\d\j\q\x\2\g\i\x\6\8\b\7\r\i\j\8\n\6\j\x\m\g\0\4\n\d\g\d\d\m\2\e\5\o\u\s\0\a\z\a\9\n\z\8\i\x\z\a\b\y\k\0\b\8\m\x\q\s\j\5\z\8\f\z\e\n\x\n\3\o\l\4\y\2\d\8\f\8\5\t\d\l\3\a\r\6\x\k\o\d\0\s\r\n\e\5\v\b\q\z\8\d\r\u\b\r\e\v\6\e\u\a\b\t\3\r\e\v\p\t\o\y\p\8\6\n\8\4\q\p\h\v\n\d\t\j\d\5\q\a\w\7\i\d\2\c\m\d\m\5\i\d\w\a\p\e\l\a\i\q\7\r\n\v\x\g\g\r\r\p\y\j\2\5\j\w\r\d\m\g\y\s\v\z\z\o\x\t\c\k\s\q\y\7\a\g\m\g\m\7\r\0\t\t\n\4\c\p\i\w\w\8\w\g\i\q\s\m\r\s\a\f\9\u\n\b\r\q\k\8\q\i\8\a\c\1\p\0\0\1\7\s\l\b\t\t\w\n\o\m\x\n\6\d\4\7\0\b\8\o\3\6\j\f\d\x\7\9\z\e\j\p\p\s\k\u\f\a\5\5\m\s\u\h\r\0\2\8\z\0\6\j\f\0\7\w\v\g\2\k\8\h\2\t\b\9\c\r\8\w\5\f\h\i\m\z\s\y\1\f\p\r\l\1\5\i\a\9\5\r\k\u\1\l\1\0\0\s\6\k\o\r\3\6\u\4\5\q\k\o\7\5\p\f\j\g\v\c\k\1\q\7\t\0\o\i\v\z\v\p\d\o\k\8\a\c\k\0\z\t\1\b\w\6\0\r\y\0\9\9\m\e\z\y\x\f\b\n\0\5\n\4\7\8\n\1\k\m\3\9\w\r\o\9\k\n\q\l\w\l\v\g\h\1\3\e\9\u\4\r\0\w\d\1\u\v\z\p\1\k\j\4\k\c\0\m\k\a\7\m\q\o\w\i\d\v\d\v\9\c\w\a\x\a\c\w\5\l\p\z\y\3\5\1\v\c\8\x\k\r\4\v\5\2\4\6\4\d\3\8\q\3
\h\j\z\v\e\0\2\i\d\6\8\m\o\6\z\a\w\d\e\1\d\d\c\g\k\r\u\3\n\7\d\5\t\x\1\q\h\o\x\8\1\e\b\r\3\r\q\4\3\h\2\3\r\k\t\i\a\8\9\r\b\e\s\e\t\y\b\7\0\0\8\x\5\7\3\q\5\3\1\h\u\a\e\j\4\x\0\s\s\0\g\g\f\1\7\9\1\d\c\2\v\p\u\c\j\d\7\d\r\4\o\r\3\w\g\h\s\c\n\d\0\q\y\h\b\b\g\i\1\a\a\5\0\v\f\8\z\l\1\q\r\l\n\c\7\5\7\n\z\c\9\9\q\a\x\1\h\7\2\n\8\i\q\s\h\x\l\u\8\p\h\0\k\a\p\4\6\9\c\3\3\h\u\p\c\6\m\5\e\p\3\8\6\q\p\0\e\j\y\y\0\2\8\e\k\q\h\1\w\e\z\c\w\p\o\a\h\e\i\7\1\f\f\3\v\2\i\w\c\s\c\k\m\x\5\r\n\t\e\6\o\1\h\o\a\u\p\c\5\1\z\1\l\9\f\l\n\0\e\9\o\8\0\c\7\8\2\l\k\e\2\w\f\1\w\i\o\z\i\f\c\k\y\c\b\b\b\9\7\3\y\t\v\o\g\v\m\h\k\4\j\f\a\d\o\q\o\a\q\4\y\e\c\1\v\z\p\3\q\8\h\m\8\g\2\u\q\n\v\o\4\c\5\n\g\9\v\z\n\u\5\k\l\6\i\q\3\h\v\d\y\3\w\y\6\w\5\v\q\l\o\6\l\s\i\n\1\m\v\0\m\8\1\m\d\8\j\l\7\p\1\h\k\2\y\f\f\i\z\0\t\c\m\7\h\u\y\2\s\9\0\8\s\8\a\c\f\l\7\d\g\m\l\z\7\d\2\u\d\u\i\b\f\u\a\2\v\4\h\0\x\d\8\n\x\a\p\9\p\b\p\b\u\u\d\l\h\k\2\i\5\a\y\9\l\f\q\i\8\4\x\6\0\e\s\w\v\t\4\s\c\2\w\g\d\z\t\t\k\i\a\v\7\j\4\0\o\q\h\c\d\m\8\2\m\7\z\8\a\s\o\k\o\d\0\c\v\p\l\8\h\h\r\l\o\x\o\l\3\c\g\5\3\a\m\0\h\f\5\7\c\0\9\m\d\6\5\k\l\r\p ]] 00:06:57.876 00:06:57.876 real 0m1.366s 00:06:57.876 user 0m0.955s 00:06:57.876 sys 0m0.589s 00:06:57.876 ************************************ 00:06:57.876 END TEST dd_rw_offset 00:06:57.876 ************************************ 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:57.876 22:37:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.136 [2024-07-15 22:37:13.444047] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
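The backslash-heavy block above is the same generated string with every character escaped, which turns the bash [[ ... == ... ]] test into a literal byte-for-byte comparison rather than a glob match. The check itself is the one traced here: read exactly 4096 bytes back from dd.dump1 and require them to equal the original data. A sketch, with $data holding the generated string:

data=$(<"$DUMP0")                    # the generated 4 KiB that was written at offset 1
read -rn4096 data_check < "$DUMP1"   # first 4096 bytes of the read-back file
[[ "$data_check" == "$data" ]] || { echo "dd_rw_offset: data mismatch"; exit 1; }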
00:06:58.136 [2024-07-15 22:37:13.444159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62932 ] 00:06:58.136 { 00:06:58.136 "subsystems": [ 00:06:58.136 { 00:06:58.136 "subsystem": "bdev", 00:06:58.136 "config": [ 00:06:58.136 { 00:06:58.136 "params": { 00:06:58.136 "trtype": "pcie", 00:06:58.136 "traddr": "0000:00:10.0", 00:06:58.136 "name": "Nvme0" 00:06:58.136 }, 00:06:58.136 "method": "bdev_nvme_attach_controller" 00:06:58.136 }, 00:06:58.136 { 00:06:58.136 "method": "bdev_wait_for_examine" 00:06:58.136 } 00:06:58.136 ] 00:06:58.136 } 00:06:58.136 ] 00:06:58.136 } 00:06:58.136 [2024-07-15 22:37:13.577141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.136 [2024-07-15 22:37:13.683129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.395 [2024-07-15 22:37:13.737358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.654  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:58.654 00:06:58.654 22:37:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.654 00:06:58.654 real 0m19.499s 00:06:58.654 user 0m14.256s 00:06:58.654 sys 0m6.860s 00:06:58.654 ************************************ 00:06:58.654 END TEST spdk_dd_basic_rw 00:06:58.654 ************************************ 00:06:58.654 22:37:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.654 22:37:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.654 22:37:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:58.654 22:37:14 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:58.654 22:37:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.654 22:37:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.654 22:37:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.654 ************************************ 00:06:58.654 START TEST spdk_dd_posix 00:06:58.654 ************************************ 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:58.654 * Looking for test storage... 
00:06:58.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:58.654 * First test run, liburing in use 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:58.654 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:58.913 ************************************ 00:06:58.913 START TEST dd_flag_append 00:06:58.913 ************************************ 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=lir8etbd6pzz92j8crhxere1kit22aiy 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=cs6dd8ktmbe3pem5i8glfhddccu8m10q 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s lir8etbd6pzz92j8crhxere1kit22aiy 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s cs6dd8ktmbe3pem5i8glfhddccu8m10q 00:06:58.913 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:58.913 [2024-07-15 22:37:14.288521] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
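dd_flag_append exercises the plain file-to-file path, so no bdev JSON config is passed: two 32-character strings are generated, one is written to each dump file, dd.dump0 is then copied onto dd.dump1 with --oflag=append, and the comparison that follows in the trace checks that dd.dump1 now holds the second string immediately followed by the first. A sketch with the same stand-in generator and the helper variables from the earlier sketches:

gen32() { tr -dc 'a-z0-9' < /dev/urandom | head -c 32; }   # stand-in for gen_bytes 32
dump0=$(gen32)
dump1=$(gen32)
printf %s "$dump0" > "$DUMP0"
printf %s "$dump1" > "$DUMP1"
"$DD" --if="$DUMP0" --of="$DUMP1" --oflag=append           # append dump0's bytes onto dump1
[[ $(<"$DUMP1") == "$dump1$dump0" ]] || { echo "dd_flag_append: unexpected contents"; exit 1; }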
00:06:58.913 [2024-07-15 22:37:14.288641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62996 ] 00:06:58.913 [2024-07-15 22:37:14.428359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.172 [2024-07-15 22:37:14.550030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.172 [2024-07-15 22:37:14.603954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.430  Copying: 32/32 [B] (average 31 kBps) 00:06:59.430 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ cs6dd8ktmbe3pem5i8glfhddccu8m10qlir8etbd6pzz92j8crhxere1kit22aiy == \c\s\6\d\d\8\k\t\m\b\e\3\p\e\m\5\i\8\g\l\f\h\d\d\c\c\u\8\m\1\0\q\l\i\r\8\e\t\b\d\6\p\z\z\9\2\j\8\c\r\h\x\e\r\e\1\k\i\t\2\2\a\i\y ]] 00:06:59.430 00:06:59.430 real 0m0.615s 00:06:59.430 user 0m0.368s 00:06:59.430 sys 0m0.261s 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.430 ************************************ 00:06:59.430 END TEST dd_flag_append 00:06:59.430 ************************************ 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:59.430 ************************************ 00:06:59.430 START TEST dd_flag_directory 00:06:59.430 ************************************ 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.430 22:37:14 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.430 22:37:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.430 [2024-07-15 22:37:14.954128] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:06:59.430 [2024-07-15 22:37:14.954235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63020 ] 00:06:59.689 [2024-07-15 22:37:15.093992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.689 [2024-07-15 22:37:15.208243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.948 [2024-07-15 22:37:15.263531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.948 [2024-07-15 22:37:15.295877] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:59.948 [2024-07-15 22:37:15.295929] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:59.948 [2024-07-15 22:37:15.295943] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.948 [2024-07-15 22:37:15.407152] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.948 
22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.948 22:37:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.206 [2024-07-15 22:37:15.562727] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:00.206 [2024-07-15 22:37:15.562835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63034 ] 00:07:00.206 [2024-07-15 22:37:15.697042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.464 [2024-07-15 22:37:15.806430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.464 [2024-07-15 22:37:15.859005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.464 [2024-07-15 22:37:15.889834] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.464 [2024-07-15 22:37:15.889888] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.464 [2024-07-15 22:37:15.889903] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.464 [2024-07-15 22:37:16.004013] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.723 00:07:00.723 real 0m1.205s 00:07:00.723 user 0m0.699s 00:07:00.723 sys 0m0.295s 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:00.723 ************************************ 00:07:00.723 END TEST dd_flag_directory 00:07:00.723 ************************************ 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1142 -- # return 0 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:00.723 ************************************ 00:07:00.723 START TEST dd_flag_nofollow 00:07:00.723 ************************************ 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.723 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.723 [2024-07-15 22:37:16.219606] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 
initialization... 00:07:00.723 [2024-07-15 22:37:16.219710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63063 ] 00:07:00.981 [2024-07-15 22:37:16.359788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.981 [2024-07-15 22:37:16.505900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.240 [2024-07-15 22:37:16.562524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.240 [2024-07-15 22:37:16.595349] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:01.240 [2024-07-15 22:37:16.595417] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:01.240 [2024-07-15 22:37:16.595449] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.240 [2024-07-15 22:37:16.711212] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:01.240 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.241 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.241 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.241 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.241 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.241 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.499 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.499 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.499 22:37:16 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.499 22:37:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.499 [2024-07-15 22:37:16.856353] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:01.499 [2024-07-15 22:37:16.856466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63072 ] 00:07:01.499 [2024-07-15 22:37:16.992144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.784 [2024-07-15 22:37:17.090392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.784 [2024-07-15 22:37:17.144682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.784 [2024-07-15 22:37:17.178359] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:01.784 [2024-07-15 22:37:17.178413] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:01.784 [2024-07-15 22:37:17.178446] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.784 [2024-07-15 22:37:17.292187] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:02.061 22:37:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.061 [2024-07-15 22:37:17.480120] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:02.061 [2024-07-15 22:37:17.480223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63085 ] 00:07:02.327 [2024-07-15 22:37:17.618432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.327 [2024-07-15 22:37:17.726695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.327 [2024-07-15 22:37:17.783102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.585  Copying: 512/512 [B] (average 500 kBps) 00:07:02.585 00:07:02.585 ************************************ 00:07:02.585 END TEST dd_flag_nofollow 00:07:02.585 ************************************ 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ mx4550b812a7ub374vkzjt4q52p1wfz0j050kfhju0ji35k2fxqege9qlz3a1dzn1sj1x9j95fk2zfmomx2jnw8bj3s70iby0hczf8ux2v63na3wet5q21nsa9dfog1afzkhz886hynlyn5cy95poai2m7r1pk66bh7h0rbezww5ujas22328cg61cksxyjkqua4x93fqj3m2toic99xkbjal70k3lr1laomhm4svr6e4m9drqglw1okr79gx3tqs74elzxe58s5074u3dvu7x8dffl5u6qei6fl28129jlj5gtqn4yccdaa8aa0kpjzhqxnuq02x4h93o797pqtdeklmne3z4tzzj8cb3kbfzjbl0q7dcec4nskpg6fawo6c4aujp6dngscj70omq8w5pjoovjnedp8jb4psq4qr9uls734zu09of86cv7luiosdcefggpoclj8xyjmmwukqj005u5094ajybw2l6xf488e80t6490d1yqt5gefjguu == \m\x\4\5\5\0\b\8\1\2\a\7\u\b\3\7\4\v\k\z\j\t\4\q\5\2\p\1\w\f\z\0\j\0\5\0\k\f\h\j\u\0\j\i\3\5\k\2\f\x\q\e\g\e\9\q\l\z\3\a\1\d\z\n\1\s\j\1\x\9\j\9\5\f\k\2\z\f\m\o\m\x\2\j\n\w\8\b\j\3\s\7\0\i\b\y\0\h\c\z\f\8\u\x\2\v\6\3\n\a\3\w\e\t\5\q\2\1\n\s\a\9\d\f\o\g\1\a\f\z\k\h\z\8\8\6\h\y\n\l\y\n\5\c\y\9\5\p\o\a\i\2\m\7\r\1\p\k\6\6\b\h\7\h\0\r\b\e\z\w\w\5\u\j\a\s\2\2\3\2\8\c\g\6\1\c\k\s\x\y\j\k\q\u\a\4\x\9\3\f\q\j\3\m\2\t\o\i\c\9\9\x\k\b\j\a\l\7\0\k\3\l\r\1\l\a\o\m\h\m\4\s\v\r\6\e\4\m\9\d\r\q\g\l\w\1\o\k\r\7\9\g\x\3\t\q\s\7\4\e\l\z\x\e\5\8\s\5\0\7\4\u\3\d\v\u\7\x\8\d\f\f\l\5\u\6\q\e\i\6\f\l\2\8\1\2\9\j\l\j\5\g\t\q\n\4\y\c\c\d\a\a\8\a\a\0\k\p\j\z\h\q\x\n\u\q\0\2\x\4\h\9\3\o\7\9\7\p\q\t\d\e\k\l\m\n\e\3\z\4\t\z\z\j\8\c\b\3\k\b\f\z\j\b\l\0\q\7\d\c\e\c\4\n\s\k\p\g\6\f\a\w\o\6\c\4\a\u\j\p\6\d\n\g\s\c\j\7\0\o\m\q\8\w\5\p\j\o\o\v\j\n\e\d\p\8\j\b\4\p\s\q\4\q\r\9\u\l\s\7\3\4\z\u\0\9\o\f\8\6\c\v\7\l\u\i\o\s\d\c\e\f\g\g\p\o\c\l\j\8\x\y\j\m\m\w\u\k\q\j\0\0\5\u\5\0\9\4\a\j\y\b\w\2\l\6\x\f\4\8\8\e\8\0\t\6\4\9\0\d\1\y\q\t\5\g\e\f\j\g\u\u ]] 00:07:02.585 00:07:02.585 real 0m1.889s 00:07:02.585 user 0m1.108s 00:07:02.585 sys 0m0.597s 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:02.585 ************************************ 00:07:02.585 START TEST dd_flag_noatime 00:07:02.585 ************************************ 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:07:02.585 22:37:18 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721083037 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721083038 00:07:02.585 22:37:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:03.961 22:37:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.961 [2024-07-15 22:37:19.172648] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:03.961 [2024-07-15 22:37:19.172956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63130 ] 00:07:03.961 [2024-07-15 22:37:19.315585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.961 [2024-07-15 22:37:19.442480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.961 [2024-07-15 22:37:19.498137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.220  Copying: 512/512 [B] (average 500 kBps) 00:07:04.220 00:07:04.220 22:37:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.220 22:37:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721083037 )) 00:07:04.220 22:37:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.220 22:37:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721083038 )) 00:07:04.220 22:37:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.477 [2024-07-15 22:37:19.802578] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:04.477 [2024-07-15 22:37:19.802691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63143 ] 00:07:04.477 [2024-07-15 22:37:19.941498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.477 [2024-07-15 22:37:20.044777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.734 [2024-07-15 22:37:20.101358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.992  Copying: 512/512 [B] (average 500 kBps) 00:07:04.992 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721083040 )) 00:07:04.992 00:07:04.992 real 0m2.253s 00:07:04.992 user 0m0.704s 00:07:04.992 sys 0m0.582s 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.992 ************************************ 00:07:04.992 END TEST dd_flag_noatime 00:07:04.992 ************************************ 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.992 ************************************ 00:07:04.992 START TEST dd_flags_misc 00:07:04.992 ************************************ 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.992 22:37:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:04.993 [2024-07-15 22:37:20.452446] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:04.993 [2024-07-15 22:37:20.452534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63172 ] 00:07:05.250 [2024-07-15 22:37:20.585321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.250 [2024-07-15 22:37:20.693639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.250 [2024-07-15 22:37:20.749350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.508  Copying: 512/512 [B] (average 500 kBps) 00:07:05.508 00:07:05.508 22:37:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 951l8eks8wqwznadb8tl3llw08c8z157qnx78v9hto7hs6gsd5ikh34zxmz8uiilr3lxzg7smaf07i8y89xpnwmyzgtmejmcftahptcq6f7rky4s0xqweafj8sx7xbumas8oe710u0nvzqn3c212pni392qdz5jm205yi71y91fp0njwg0c93c37p0zzd0gzo45bkb4idvxzlx2knxalgud8xeqtlfe26takbyf8403gyajugaak1xrrs2lk0ndg8mbr65vrhvae00qi1mkbj73ct3kgl1tkc0tgxiqlly5mfadjpzpbr3kkgxjs2u504w5hd3k0f50t35ivxwnv6tm1xivbu3l6nljagxtimxhjyy397uh2gqoq3sp2a7vxf6qboae88vvb2muba2nrz1mffr0xb61aglg3oxoe90ke2g6us82bfaftsl4ay8s2jpzbu3ln538ag2i7pbe8jcd3rorixk5iqn33k72hp8pouz88snl2tr5mje6gqirx == \9\5\1\l\8\e\k\s\8\w\q\w\z\n\a\d\b\8\t\l\3\l\l\w\0\8\c\8\z\1\5\7\q\n\x\7\8\v\9\h\t\o\7\h\s\6\g\s\d\5\i\k\h\3\4\z\x\m\z\8\u\i\i\l\r\3\l\x\z\g\7\s\m\a\f\0\7\i\8\y\8\9\x\p\n\w\m\y\z\g\t\m\e\j\m\c\f\t\a\h\p\t\c\q\6\f\7\r\k\y\4\s\0\x\q\w\e\a\f\j\8\s\x\7\x\b\u\m\a\s\8\o\e\7\1\0\u\0\n\v\z\q\n\3\c\2\1\2\p\n\i\3\9\2\q\d\z\5\j\m\2\0\5\y\i\7\1\y\9\1\f\p\0\n\j\w\g\0\c\9\3\c\3\7\p\0\z\z\d\0\g\z\o\4\5\b\k\b\4\i\d\v\x\z\l\x\2\k\n\x\a\l\g\u\d\8\x\e\q\t\l\f\e\2\6\t\a\k\b\y\f\8\4\0\3\g\y\a\j\u\g\a\a\k\1\x\r\r\s\2\l\k\0\n\d\g\8\m\b\r\6\5\v\r\h\v\a\e\0\0\q\i\1\m\k\b\j\7\3\c\t\3\k\g\l\1\t\k\c\0\t\g\x\i\q\l\l\y\5\m\f\a\d\j\p\z\p\b\r\3\k\k\g\x\j\s\2\u\5\0\4\w\5\h\d\3\k\0\f\5\0\t\3\5\i\v\x\w\n\v\6\t\m\1\x\i\v\b\u\3\l\6\n\l\j\a\g\x\t\i\m\x\h\j\y\y\3\9\7\u\h\2\g\q\o\q\3\s\p\2\a\7\v\x\f\6\q\b\o\a\e\8\8\v\v\b\2\m\u\b\a\2\n\r\z\1\m\f\f\r\0\x\b\6\1\a\g\l\g\3\o\x\o\e\9\0\k\e\2\g\6\u\s\8\2\b\f\a\f\t\s\l\4\a\y\8\s\2\j\p\z\b\u\3\l\n\5\3\8\a\g\2\i\7\p\b\e\8\j\c\d\3\r\o\r\i\x\k\5\i\q\n\3\3\k\7\2\h\p\8\p\o\u\z\8\8\s\n\l\2\t\r\5\m\j\e\6\g\q\i\r\x ]] 00:07:05.508 22:37:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.508 22:37:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.508 [2024-07-15 22:37:21.075439] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:05.508 [2024-07-15 22:37:21.075532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63181 ] 00:07:05.767 [2024-07-15 22:37:21.213774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.767 [2024-07-15 22:37:21.320782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.026 [2024-07-15 22:37:21.376994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.285  Copying: 512/512 [B] (average 500 kBps) 00:07:06.285 00:07:06.285 22:37:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 951l8eks8wqwznadb8tl3llw08c8z157qnx78v9hto7hs6gsd5ikh34zxmz8uiilr3lxzg7smaf07i8y89xpnwmyzgtmejmcftahptcq6f7rky4s0xqweafj8sx7xbumas8oe710u0nvzqn3c212pni392qdz5jm205yi71y91fp0njwg0c93c37p0zzd0gzo45bkb4idvxzlx2knxalgud8xeqtlfe26takbyf8403gyajugaak1xrrs2lk0ndg8mbr65vrhvae00qi1mkbj73ct3kgl1tkc0tgxiqlly5mfadjpzpbr3kkgxjs2u504w5hd3k0f50t35ivxwnv6tm1xivbu3l6nljagxtimxhjyy397uh2gqoq3sp2a7vxf6qboae88vvb2muba2nrz1mffr0xb61aglg3oxoe90ke2g6us82bfaftsl4ay8s2jpzbu3ln538ag2i7pbe8jcd3rorixk5iqn33k72hp8pouz88snl2tr5mje6gqirx == \9\5\1\l\8\e\k\s\8\w\q\w\z\n\a\d\b\8\t\l\3\l\l\w\0\8\c\8\z\1\5\7\q\n\x\7\8\v\9\h\t\o\7\h\s\6\g\s\d\5\i\k\h\3\4\z\x\m\z\8\u\i\i\l\r\3\l\x\z\g\7\s\m\a\f\0\7\i\8\y\8\9\x\p\n\w\m\y\z\g\t\m\e\j\m\c\f\t\a\h\p\t\c\q\6\f\7\r\k\y\4\s\0\x\q\w\e\a\f\j\8\s\x\7\x\b\u\m\a\s\8\o\e\7\1\0\u\0\n\v\z\q\n\3\c\2\1\2\p\n\i\3\9\2\q\d\z\5\j\m\2\0\5\y\i\7\1\y\9\1\f\p\0\n\j\w\g\0\c\9\3\c\3\7\p\0\z\z\d\0\g\z\o\4\5\b\k\b\4\i\d\v\x\z\l\x\2\k\n\x\a\l\g\u\d\8\x\e\q\t\l\f\e\2\6\t\a\k\b\y\f\8\4\0\3\g\y\a\j\u\g\a\a\k\1\x\r\r\s\2\l\k\0\n\d\g\8\m\b\r\6\5\v\r\h\v\a\e\0\0\q\i\1\m\k\b\j\7\3\c\t\3\k\g\l\1\t\k\c\0\t\g\x\i\q\l\l\y\5\m\f\a\d\j\p\z\p\b\r\3\k\k\g\x\j\s\2\u\5\0\4\w\5\h\d\3\k\0\f\5\0\t\3\5\i\v\x\w\n\v\6\t\m\1\x\i\v\b\u\3\l\6\n\l\j\a\g\x\t\i\m\x\h\j\y\y\3\9\7\u\h\2\g\q\o\q\3\s\p\2\a\7\v\x\f\6\q\b\o\a\e\8\8\v\v\b\2\m\u\b\a\2\n\r\z\1\m\f\f\r\0\x\b\6\1\a\g\l\g\3\o\x\o\e\9\0\k\e\2\g\6\u\s\8\2\b\f\a\f\t\s\l\4\a\y\8\s\2\j\p\z\b\u\3\l\n\5\3\8\a\g\2\i\7\p\b\e\8\j\c\d\3\r\o\r\i\x\k\5\i\q\n\3\3\k\7\2\h\p\8\p\o\u\z\8\8\s\n\l\2\t\r\5\m\j\e\6\g\q\i\r\x ]] 00:07:06.285 22:37:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.285 22:37:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:06.285 [2024-07-15 22:37:21.695531] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:06.285 [2024-07-15 22:37:21.695647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:07:06.285 [2024-07-15 22:37:21.830569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.544 [2024-07-15 22:37:21.934831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.544 [2024-07-15 22:37:21.991232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.805  Copying: 512/512 [B] (average 125 kBps) 00:07:06.805 00:07:06.806 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 951l8eks8wqwznadb8tl3llw08c8z157qnx78v9hto7hs6gsd5ikh34zxmz8uiilr3lxzg7smaf07i8y89xpnwmyzgtmejmcftahptcq6f7rky4s0xqweafj8sx7xbumas8oe710u0nvzqn3c212pni392qdz5jm205yi71y91fp0njwg0c93c37p0zzd0gzo45bkb4idvxzlx2knxalgud8xeqtlfe26takbyf8403gyajugaak1xrrs2lk0ndg8mbr65vrhvae00qi1mkbj73ct3kgl1tkc0tgxiqlly5mfadjpzpbr3kkgxjs2u504w5hd3k0f50t35ivxwnv6tm1xivbu3l6nljagxtimxhjyy397uh2gqoq3sp2a7vxf6qboae88vvb2muba2nrz1mffr0xb61aglg3oxoe90ke2g6us82bfaftsl4ay8s2jpzbu3ln538ag2i7pbe8jcd3rorixk5iqn33k72hp8pouz88snl2tr5mje6gqirx == \9\5\1\l\8\e\k\s\8\w\q\w\z\n\a\d\b\8\t\l\3\l\l\w\0\8\c\8\z\1\5\7\q\n\x\7\8\v\9\h\t\o\7\h\s\6\g\s\d\5\i\k\h\3\4\z\x\m\z\8\u\i\i\l\r\3\l\x\z\g\7\s\m\a\f\0\7\i\8\y\8\9\x\p\n\w\m\y\z\g\t\m\e\j\m\c\f\t\a\h\p\t\c\q\6\f\7\r\k\y\4\s\0\x\q\w\e\a\f\j\8\s\x\7\x\b\u\m\a\s\8\o\e\7\1\0\u\0\n\v\z\q\n\3\c\2\1\2\p\n\i\3\9\2\q\d\z\5\j\m\2\0\5\y\i\7\1\y\9\1\f\p\0\n\j\w\g\0\c\9\3\c\3\7\p\0\z\z\d\0\g\z\o\4\5\b\k\b\4\i\d\v\x\z\l\x\2\k\n\x\a\l\g\u\d\8\x\e\q\t\l\f\e\2\6\t\a\k\b\y\f\8\4\0\3\g\y\a\j\u\g\a\a\k\1\x\r\r\s\2\l\k\0\n\d\g\8\m\b\r\6\5\v\r\h\v\a\e\0\0\q\i\1\m\k\b\j\7\3\c\t\3\k\g\l\1\t\k\c\0\t\g\x\i\q\l\l\y\5\m\f\a\d\j\p\z\p\b\r\3\k\k\g\x\j\s\2\u\5\0\4\w\5\h\d\3\k\0\f\5\0\t\3\5\i\v\x\w\n\v\6\t\m\1\x\i\v\b\u\3\l\6\n\l\j\a\g\x\t\i\m\x\h\j\y\y\3\9\7\u\h\2\g\q\o\q\3\s\p\2\a\7\v\x\f\6\q\b\o\a\e\8\8\v\v\b\2\m\u\b\a\2\n\r\z\1\m\f\f\r\0\x\b\6\1\a\g\l\g\3\o\x\o\e\9\0\k\e\2\g\6\u\s\8\2\b\f\a\f\t\s\l\4\a\y\8\s\2\j\p\z\b\u\3\l\n\5\3\8\a\g\2\i\7\p\b\e\8\j\c\d\3\r\o\r\i\x\k\5\i\q\n\3\3\k\7\2\h\p\8\p\o\u\z\8\8\s\n\l\2\t\r\5\m\j\e\6\g\q\i\r\x ]] 00:07:06.806 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.806 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.806 [2024-07-15 22:37:22.297256] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:06.806 [2024-07-15 22:37:22.297354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63206 ] 00:07:07.064 [2024-07-15 22:37:22.433357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.064 [2024-07-15 22:37:22.529163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.064 [2024-07-15 22:37:22.583020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.323  Copying: 512/512 [B] (average 500 kBps) 00:07:07.323 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 951l8eks8wqwznadb8tl3llw08c8z157qnx78v9hto7hs6gsd5ikh34zxmz8uiilr3lxzg7smaf07i8y89xpnwmyzgtmejmcftahptcq6f7rky4s0xqweafj8sx7xbumas8oe710u0nvzqn3c212pni392qdz5jm205yi71y91fp0njwg0c93c37p0zzd0gzo45bkb4idvxzlx2knxalgud8xeqtlfe26takbyf8403gyajugaak1xrrs2lk0ndg8mbr65vrhvae00qi1mkbj73ct3kgl1tkc0tgxiqlly5mfadjpzpbr3kkgxjs2u504w5hd3k0f50t35ivxwnv6tm1xivbu3l6nljagxtimxhjyy397uh2gqoq3sp2a7vxf6qboae88vvb2muba2nrz1mffr0xb61aglg3oxoe90ke2g6us82bfaftsl4ay8s2jpzbu3ln538ag2i7pbe8jcd3rorixk5iqn33k72hp8pouz88snl2tr5mje6gqirx == \9\5\1\l\8\e\k\s\8\w\q\w\z\n\a\d\b\8\t\l\3\l\l\w\0\8\c\8\z\1\5\7\q\n\x\7\8\v\9\h\t\o\7\h\s\6\g\s\d\5\i\k\h\3\4\z\x\m\z\8\u\i\i\l\r\3\l\x\z\g\7\s\m\a\f\0\7\i\8\y\8\9\x\p\n\w\m\y\z\g\t\m\e\j\m\c\f\t\a\h\p\t\c\q\6\f\7\r\k\y\4\s\0\x\q\w\e\a\f\j\8\s\x\7\x\b\u\m\a\s\8\o\e\7\1\0\u\0\n\v\z\q\n\3\c\2\1\2\p\n\i\3\9\2\q\d\z\5\j\m\2\0\5\y\i\7\1\y\9\1\f\p\0\n\j\w\g\0\c\9\3\c\3\7\p\0\z\z\d\0\g\z\o\4\5\b\k\b\4\i\d\v\x\z\l\x\2\k\n\x\a\l\g\u\d\8\x\e\q\t\l\f\e\2\6\t\a\k\b\y\f\8\4\0\3\g\y\a\j\u\g\a\a\k\1\x\r\r\s\2\l\k\0\n\d\g\8\m\b\r\6\5\v\r\h\v\a\e\0\0\q\i\1\m\k\b\j\7\3\c\t\3\k\g\l\1\t\k\c\0\t\g\x\i\q\l\l\y\5\m\f\a\d\j\p\z\p\b\r\3\k\k\g\x\j\s\2\u\5\0\4\w\5\h\d\3\k\0\f\5\0\t\3\5\i\v\x\w\n\v\6\t\m\1\x\i\v\b\u\3\l\6\n\l\j\a\g\x\t\i\m\x\h\j\y\y\3\9\7\u\h\2\g\q\o\q\3\s\p\2\a\7\v\x\f\6\q\b\o\a\e\8\8\v\v\b\2\m\u\b\a\2\n\r\z\1\m\f\f\r\0\x\b\6\1\a\g\l\g\3\o\x\o\e\9\0\k\e\2\g\6\u\s\8\2\b\f\a\f\t\s\l\4\a\y\8\s\2\j\p\z\b\u\3\l\n\5\3\8\a\g\2\i\7\p\b\e\8\j\c\d\3\r\o\r\i\x\k\5\i\q\n\3\3\k\7\2\h\p\8\p\o\u\z\8\8\s\n\l\2\t\r\5\m\j\e\6\g\q\i\r\x ]] 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.323 22:37:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:07.323 [2024-07-15 22:37:22.887015] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:07.323 [2024-07-15 22:37:22.887252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63217 ] 00:07:07.583 [2024-07-15 22:37:23.020481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.583 [2024-07-15 22:37:23.121080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.851 [2024-07-15 22:37:23.175610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.110  Copying: 512/512 [B] (average 500 kBps) 00:07:08.110 00:07:08.110 22:37:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7x91l1037jr2ydtxd3iesb4nb12um0qn338zb6l316zfcyabo4dk05n3irtlmfomngaud9bhygzk7bn1duf8wum5dwq2nhnvlful9rfw0ec3f8ba412tfx7rlci80mqcu73rgg26hmev09kfepiyppbb04k8qoqgdabut0w68l2z90r5em3i58zy8wnbg9cobcsg3vnjt5pu4tgiccirqyncfbl2li9g1tsqfsda3r2dp4owamh0qvgxpuuba11pt84huf2a2qkz4c9jy5rzj6dx3shk2n9bio0ezzfe4ckb7xpseh995051o6x52b9x9a6c0yifr3trq8hd9lsllbe42n2pozh2qr01it32sbey6ydz4ulroqoqa9867pp6mx464gow5e2uqtcyo3pvs450057c8eryhz0oe9n6ln23onpd9t5y7mswplx78dwku3652t0dsphte1q50ekhrojtddw5ln4qp9h5pk7cd7fc8k34dbw3q49ces9yc755 == \7\x\9\1\l\1\0\3\7\j\r\2\y\d\t\x\d\3\i\e\s\b\4\n\b\1\2\u\m\0\q\n\3\3\8\z\b\6\l\3\1\6\z\f\c\y\a\b\o\4\d\k\0\5\n\3\i\r\t\l\m\f\o\m\n\g\a\u\d\9\b\h\y\g\z\k\7\b\n\1\d\u\f\8\w\u\m\5\d\w\q\2\n\h\n\v\l\f\u\l\9\r\f\w\0\e\c\3\f\8\b\a\4\1\2\t\f\x\7\r\l\c\i\8\0\m\q\c\u\7\3\r\g\g\2\6\h\m\e\v\0\9\k\f\e\p\i\y\p\p\b\b\0\4\k\8\q\o\q\g\d\a\b\u\t\0\w\6\8\l\2\z\9\0\r\5\e\m\3\i\5\8\z\y\8\w\n\b\g\9\c\o\b\c\s\g\3\v\n\j\t\5\p\u\4\t\g\i\c\c\i\r\q\y\n\c\f\b\l\2\l\i\9\g\1\t\s\q\f\s\d\a\3\r\2\d\p\4\o\w\a\m\h\0\q\v\g\x\p\u\u\b\a\1\1\p\t\8\4\h\u\f\2\a\2\q\k\z\4\c\9\j\y\5\r\z\j\6\d\x\3\s\h\k\2\n\9\b\i\o\0\e\z\z\f\e\4\c\k\b\7\x\p\s\e\h\9\9\5\0\5\1\o\6\x\5\2\b\9\x\9\a\6\c\0\y\i\f\r\3\t\r\q\8\h\d\9\l\s\l\l\b\e\4\2\n\2\p\o\z\h\2\q\r\0\1\i\t\3\2\s\b\e\y\6\y\d\z\4\u\l\r\o\q\o\q\a\9\8\6\7\p\p\6\m\x\4\6\4\g\o\w\5\e\2\u\q\t\c\y\o\3\p\v\s\4\5\0\0\5\7\c\8\e\r\y\h\z\0\o\e\9\n\6\l\n\2\3\o\n\p\d\9\t\5\y\7\m\s\w\p\l\x\7\8\d\w\k\u\3\6\5\2\t\0\d\s\p\h\t\e\1\q\5\0\e\k\h\r\o\j\t\d\d\w\5\l\n\4\q\p\9\h\5\p\k\7\c\d\7\f\c\8\k\3\4\d\b\w\3\q\4\9\c\e\s\9\y\c\7\5\5 ]] 00:07:08.110 22:37:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.110 22:37:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:08.110 [2024-07-15 22:37:23.493439] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:08.110 [2024-07-15 22:37:23.493551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63227 ] 00:07:08.110 [2024-07-15 22:37:23.631716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.369 [2024-07-15 22:37:23.735171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.369 [2024-07-15 22:37:23.790321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.627  Copying: 512/512 [B] (average 500 kBps) 00:07:08.627 00:07:08.627 22:37:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7x91l1037jr2ydtxd3iesb4nb12um0qn338zb6l316zfcyabo4dk05n3irtlmfomngaud9bhygzk7bn1duf8wum5dwq2nhnvlful9rfw0ec3f8ba412tfx7rlci80mqcu73rgg26hmev09kfepiyppbb04k8qoqgdabut0w68l2z90r5em3i58zy8wnbg9cobcsg3vnjt5pu4tgiccirqyncfbl2li9g1tsqfsda3r2dp4owamh0qvgxpuuba11pt84huf2a2qkz4c9jy5rzj6dx3shk2n9bio0ezzfe4ckb7xpseh995051o6x52b9x9a6c0yifr3trq8hd9lsllbe42n2pozh2qr01it32sbey6ydz4ulroqoqa9867pp6mx464gow5e2uqtcyo3pvs450057c8eryhz0oe9n6ln23onpd9t5y7mswplx78dwku3652t0dsphte1q50ekhrojtddw5ln4qp9h5pk7cd7fc8k34dbw3q49ces9yc755 == \7\x\9\1\l\1\0\3\7\j\r\2\y\d\t\x\d\3\i\e\s\b\4\n\b\1\2\u\m\0\q\n\3\3\8\z\b\6\l\3\1\6\z\f\c\y\a\b\o\4\d\k\0\5\n\3\i\r\t\l\m\f\o\m\n\g\a\u\d\9\b\h\y\g\z\k\7\b\n\1\d\u\f\8\w\u\m\5\d\w\q\2\n\h\n\v\l\f\u\l\9\r\f\w\0\e\c\3\f\8\b\a\4\1\2\t\f\x\7\r\l\c\i\8\0\m\q\c\u\7\3\r\g\g\2\6\h\m\e\v\0\9\k\f\e\p\i\y\p\p\b\b\0\4\k\8\q\o\q\g\d\a\b\u\t\0\w\6\8\l\2\z\9\0\r\5\e\m\3\i\5\8\z\y\8\w\n\b\g\9\c\o\b\c\s\g\3\v\n\j\t\5\p\u\4\t\g\i\c\c\i\r\q\y\n\c\f\b\l\2\l\i\9\g\1\t\s\q\f\s\d\a\3\r\2\d\p\4\o\w\a\m\h\0\q\v\g\x\p\u\u\b\a\1\1\p\t\8\4\h\u\f\2\a\2\q\k\z\4\c\9\j\y\5\r\z\j\6\d\x\3\s\h\k\2\n\9\b\i\o\0\e\z\z\f\e\4\c\k\b\7\x\p\s\e\h\9\9\5\0\5\1\o\6\x\5\2\b\9\x\9\a\6\c\0\y\i\f\r\3\t\r\q\8\h\d\9\l\s\l\l\b\e\4\2\n\2\p\o\z\h\2\q\r\0\1\i\t\3\2\s\b\e\y\6\y\d\z\4\u\l\r\o\q\o\q\a\9\8\6\7\p\p\6\m\x\4\6\4\g\o\w\5\e\2\u\q\t\c\y\o\3\p\v\s\4\5\0\0\5\7\c\8\e\r\y\h\z\0\o\e\9\n\6\l\n\2\3\o\n\p\d\9\t\5\y\7\m\s\w\p\l\x\7\8\d\w\k\u\3\6\5\2\t\0\d\s\p\h\t\e\1\q\5\0\e\k\h\r\o\j\t\d\d\w\5\l\n\4\q\p\9\h\5\p\k\7\c\d\7\f\c\8\k\3\4\d\b\w\3\q\4\9\c\e\s\9\y\c\7\5\5 ]] 00:07:08.627 22:37:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.627 22:37:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:08.627 [2024-07-15 22:37:24.118107] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:08.627 [2024-07-15 22:37:24.118211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63236 ] 00:07:08.887 [2024-07-15 22:37:24.257775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.887 [2024-07-15 22:37:24.366112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.887 [2024-07-15 22:37:24.423724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.146  Copying: 512/512 [B] (average 250 kBps) 00:07:09.146 00:07:09.146 22:37:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7x91l1037jr2ydtxd3iesb4nb12um0qn338zb6l316zfcyabo4dk05n3irtlmfomngaud9bhygzk7bn1duf8wum5dwq2nhnvlful9rfw0ec3f8ba412tfx7rlci80mqcu73rgg26hmev09kfepiyppbb04k8qoqgdabut0w68l2z90r5em3i58zy8wnbg9cobcsg3vnjt5pu4tgiccirqyncfbl2li9g1tsqfsda3r2dp4owamh0qvgxpuuba11pt84huf2a2qkz4c9jy5rzj6dx3shk2n9bio0ezzfe4ckb7xpseh995051o6x52b9x9a6c0yifr3trq8hd9lsllbe42n2pozh2qr01it32sbey6ydz4ulroqoqa9867pp6mx464gow5e2uqtcyo3pvs450057c8eryhz0oe9n6ln23onpd9t5y7mswplx78dwku3652t0dsphte1q50ekhrojtddw5ln4qp9h5pk7cd7fc8k34dbw3q49ces9yc755 == \7\x\9\1\l\1\0\3\7\j\r\2\y\d\t\x\d\3\i\e\s\b\4\n\b\1\2\u\m\0\q\n\3\3\8\z\b\6\l\3\1\6\z\f\c\y\a\b\o\4\d\k\0\5\n\3\i\r\t\l\m\f\o\m\n\g\a\u\d\9\b\h\y\g\z\k\7\b\n\1\d\u\f\8\w\u\m\5\d\w\q\2\n\h\n\v\l\f\u\l\9\r\f\w\0\e\c\3\f\8\b\a\4\1\2\t\f\x\7\r\l\c\i\8\0\m\q\c\u\7\3\r\g\g\2\6\h\m\e\v\0\9\k\f\e\p\i\y\p\p\b\b\0\4\k\8\q\o\q\g\d\a\b\u\t\0\w\6\8\l\2\z\9\0\r\5\e\m\3\i\5\8\z\y\8\w\n\b\g\9\c\o\b\c\s\g\3\v\n\j\t\5\p\u\4\t\g\i\c\c\i\r\q\y\n\c\f\b\l\2\l\i\9\g\1\t\s\q\f\s\d\a\3\r\2\d\p\4\o\w\a\m\h\0\q\v\g\x\p\u\u\b\a\1\1\p\t\8\4\h\u\f\2\a\2\q\k\z\4\c\9\j\y\5\r\z\j\6\d\x\3\s\h\k\2\n\9\b\i\o\0\e\z\z\f\e\4\c\k\b\7\x\p\s\e\h\9\9\5\0\5\1\o\6\x\5\2\b\9\x\9\a\6\c\0\y\i\f\r\3\t\r\q\8\h\d\9\l\s\l\l\b\e\4\2\n\2\p\o\z\h\2\q\r\0\1\i\t\3\2\s\b\e\y\6\y\d\z\4\u\l\r\o\q\o\q\a\9\8\6\7\p\p\6\m\x\4\6\4\g\o\w\5\e\2\u\q\t\c\y\o\3\p\v\s\4\5\0\0\5\7\c\8\e\r\y\h\z\0\o\e\9\n\6\l\n\2\3\o\n\p\d\9\t\5\y\7\m\s\w\p\l\x\7\8\d\w\k\u\3\6\5\2\t\0\d\s\p\h\t\e\1\q\5\0\e\k\h\r\o\j\t\d\d\w\5\l\n\4\q\p\9\h\5\p\k\7\c\d\7\f\c\8\k\3\4\d\b\w\3\q\4\9\c\e\s\9\y\c\7\5\5 ]] 00:07:09.147 22:37:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.147 22:37:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:09.405 [2024-07-15 22:37:24.751157] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:09.406 [2024-07-15 22:37:24.751258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:07:09.406 [2024-07-15 22:37:24.891553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.664 [2024-07-15 22:37:25.022076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.664 [2024-07-15 22:37:25.082767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.923  Copying: 512/512 [B] (average 250 kBps) 00:07:09.924 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7x91l1037jr2ydtxd3iesb4nb12um0qn338zb6l316zfcyabo4dk05n3irtlmfomngaud9bhygzk7bn1duf8wum5dwq2nhnvlful9rfw0ec3f8ba412tfx7rlci80mqcu73rgg26hmev09kfepiyppbb04k8qoqgdabut0w68l2z90r5em3i58zy8wnbg9cobcsg3vnjt5pu4tgiccirqyncfbl2li9g1tsqfsda3r2dp4owamh0qvgxpuuba11pt84huf2a2qkz4c9jy5rzj6dx3shk2n9bio0ezzfe4ckb7xpseh995051o6x52b9x9a6c0yifr3trq8hd9lsllbe42n2pozh2qr01it32sbey6ydz4ulroqoqa9867pp6mx464gow5e2uqtcyo3pvs450057c8eryhz0oe9n6ln23onpd9t5y7mswplx78dwku3652t0dsphte1q50ekhrojtddw5ln4qp9h5pk7cd7fc8k34dbw3q49ces9yc755 == \7\x\9\1\l\1\0\3\7\j\r\2\y\d\t\x\d\3\i\e\s\b\4\n\b\1\2\u\m\0\q\n\3\3\8\z\b\6\l\3\1\6\z\f\c\y\a\b\o\4\d\k\0\5\n\3\i\r\t\l\m\f\o\m\n\g\a\u\d\9\b\h\y\g\z\k\7\b\n\1\d\u\f\8\w\u\m\5\d\w\q\2\n\h\n\v\l\f\u\l\9\r\f\w\0\e\c\3\f\8\b\a\4\1\2\t\f\x\7\r\l\c\i\8\0\m\q\c\u\7\3\r\g\g\2\6\h\m\e\v\0\9\k\f\e\p\i\y\p\p\b\b\0\4\k\8\q\o\q\g\d\a\b\u\t\0\w\6\8\l\2\z\9\0\r\5\e\m\3\i\5\8\z\y\8\w\n\b\g\9\c\o\b\c\s\g\3\v\n\j\t\5\p\u\4\t\g\i\c\c\i\r\q\y\n\c\f\b\l\2\l\i\9\g\1\t\s\q\f\s\d\a\3\r\2\d\p\4\o\w\a\m\h\0\q\v\g\x\p\u\u\b\a\1\1\p\t\8\4\h\u\f\2\a\2\q\k\z\4\c\9\j\y\5\r\z\j\6\d\x\3\s\h\k\2\n\9\b\i\o\0\e\z\z\f\e\4\c\k\b\7\x\p\s\e\h\9\9\5\0\5\1\o\6\x\5\2\b\9\x\9\a\6\c\0\y\i\f\r\3\t\r\q\8\h\d\9\l\s\l\l\b\e\4\2\n\2\p\o\z\h\2\q\r\0\1\i\t\3\2\s\b\e\y\6\y\d\z\4\u\l\r\o\q\o\q\a\9\8\6\7\p\p\6\m\x\4\6\4\g\o\w\5\e\2\u\q\t\c\y\o\3\p\v\s\4\5\0\0\5\7\c\8\e\r\y\h\z\0\o\e\9\n\6\l\n\2\3\o\n\p\d\9\t\5\y\7\m\s\w\p\l\x\7\8\d\w\k\u\3\6\5\2\t\0\d\s\p\h\t\e\1\q\5\0\e\k\h\r\o\j\t\d\d\w\5\l\n\4\q\p\9\h\5\p\k\7\c\d\7\f\c\8\k\3\4\d\b\w\3\q\4\9\c\e\s\9\y\c\7\5\5 ]] 00:07:09.924 00:07:09.924 real 0m4.950s 00:07:09.924 user 0m2.867s 00:07:09.924 sys 0m2.279s 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.924 ************************************ 00:07:09.924 END TEST dd_flags_misc 00:07:09.924 ************************************ 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:09.924 * Second test run, disabling liburing, forcing AIO 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.924 22:37:25 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.924 ************************************ 00:07:09.924 START TEST dd_flag_append_forced_aio 00:07:09.924 ************************************ 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=5vf2jbmqq4q8urhsed1lqf11lx6nipc9 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=8s6lm1k0w4baidd6k19ku7mnbhr2v5dn 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 5vf2jbmqq4q8urhsed1lqf11lx6nipc9 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 8s6lm1k0w4baidd6k19ku7mnbhr2v5dn 00:07:09.924 22:37:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:09.924 [2024-07-15 22:37:25.462318] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:09.924 [2024-07-15 22:37:25.462418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:07:10.183 [2024-07-15 22:37:25.601603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.183 [2024-07-15 22:37:25.687393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.183 [2024-07-15 22:37:25.744765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.700  Copying: 32/32 [B] (average 31 kBps) 00:07:10.700 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 8s6lm1k0w4baidd6k19ku7mnbhr2v5dn5vf2jbmqq4q8urhsed1lqf11lx6nipc9 == \8\s\6\l\m\1\k\0\w\4\b\a\i\d\d\6\k\1\9\k\u\7\m\n\b\h\r\2\v\5\d\n\5\v\f\2\j\b\m\q\q\4\q\8\u\r\h\s\e\d\1\l\q\f\1\1\l\x\6\n\i\p\c\9 ]] 00:07:10.700 00:07:10.700 real 0m0.637s 00:07:10.700 user 0m0.361s 00:07:10.700 sys 0m0.148s 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 ************************************ 00:07:10.700 END TEST dd_flag_append_forced_aio 00:07:10.700 ************************************ 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 ************************************ 00:07:10.700 START TEST dd_flag_directory_forced_aio 00:07:10.700 ************************************ 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.700 
22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.700 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.700 [2024-07-15 22:37:26.148769] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:10.700 [2024-07-15 22:37:26.148866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63306 ] 00:07:10.960 [2024-07-15 22:37:26.286936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.960 [2024-07-15 22:37:26.392034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.960 [2024-07-15 22:37:26.449477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.960 [2024-07-15 22:37:26.482758] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:10.960 [2024-07-15 22:37:26.482822] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:10.960 [2024-07-15 22:37:26.482852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.273 [2024-07-15 22:37:26.596589] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.273 22:37:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:11.273 [2024-07-15 22:37:26.751709] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:11.273 [2024-07-15 22:37:26.751810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63316 ] 00:07:11.531 [2024-07-15 22:37:26.890813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.531 [2024-07-15 22:37:26.996253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.531 [2024-07-15 22:37:27.051097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.531 [2024-07-15 22:37:27.083788] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:11.531 [2024-07-15 22:37:27.083840] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:11.531 [2024-07-15 22:37:27.083855] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.791 [2024-07-15 22:37:27.200687] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.791 00:07:11.791 real 0m1.210s 00:07:11.791 user 0m0.700s 00:07:11.791 sys 0m0.300s 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.791 ************************************ 00:07:11.791 END TEST dd_flag_directory_forced_aio 00:07:11.791 ************************************ 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:11.791 ************************************ 00:07:11.791 START TEST dd_flag_nofollow_forced_aio 00:07:11.791 ************************************ 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:11.791 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.051 22:37:27 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:12.051 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.051 [2024-07-15 22:37:27.424988] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:12.051 [2024-07-15 22:37:27.425097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63350 ] 00:07:12.051 [2024-07-15 22:37:27.562488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.310 [2024-07-15 22:37:27.670419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.310 [2024-07-15 22:37:27.725309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.310 [2024-07-15 22:37:27.758122] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:12.310 [2024-07-15 22:37:27.758174] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:12.310 [2024-07-15 22:37:27.758205] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.310 [2024-07-15 22:37:27.874215] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:12.570 22:37:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:12.570 [2024-07-15 22:37:28.020007] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:12.570 [2024-07-15 22:37:28.020101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:07:12.829 [2024-07-15 22:37:28.152668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.829 [2024-07-15 22:37:28.258350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.829 [2024-07-15 22:37:28.314188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.829 [2024-07-15 22:37:28.346069] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:12.829 [2024-07-15 22:37:28.346121] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:12.829 [2024-07-15 22:37:28.346153] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.089 [2024-07-15 22:37:28.462233] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
dd/posix.sh@46 -- # gen_bytes 512 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:13.089 22:37:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.089 [2024-07-15 22:37:28.622330] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:13.089 [2024-07-15 22:37:28.622467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63367 ] 00:07:13.348 [2024-07-15 22:37:28.758506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.348 [2024-07-15 22:37:28.854293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.348 [2024-07-15 22:37:28.911283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.866  Copying: 512/512 [B] (average 500 kBps) 00:07:13.866 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ysh35qm7aqetbwgthmypiq1gox9zluejanwe4isxux7o5yncafaohn3luyv8iqpam40abukk25t2mx4mknz1ek92q5mbfbcc8y0n88txvqnil3whoqh20avm8li14afnt8gil9bvey5nmg1t0oqpq21t29q6ijlghwn3arbyynxfbthvseob1jdq5pykxez0q1z86o9a5wwukplg6nr2bro4gtmj0a6hsqtrapa183uvscpxjxh45xmb3hupk1vfrwrfg0kydmvi6gy1zejcqypassdhie19t5xc04jkx708n6nv58cne9javyzum49armdyz46abdol3bh2eu5gvj3tzrs5bj3g4a4yoiu2nxiedcuigcmtgi1rrsml7ttenhao3hluj083scax2xfl7k2q4b6sfn4cgpuw6a7irz08rptum5nl35d6xrampcojz7q6rbjfty854o3rh5kjh1qfsvvksido8k7w95iaf9qalv14vs1ee9vifou7a10h == \y\s\h\3\5\q\m\7\a\q\e\t\b\w\g\t\h\m\y\p\i\q\1\g\o\x\9\z\l\u\e\j\a\n\w\e\4\i\s\x\u\x\7\o\5\y\n\c\a\f\a\o\h\n\3\l\u\y\v\8\i\q\p\a\m\4\0\a\b\u\k\k\2\5\t\2\m\x\4\m\k\n\z\1\e\k\9\2\q\5\m\b\f\b\c\c\8\y\0\n\8\8\t\x\v\q\n\i\l\3\w\h\o\q\h\2\0\a\v\m\8\l\i\1\4\a\f\n\t\8\g\i\l\9\b\v\e\y\5\n\m\g\1\t\0\o\q\p\q\2\1\t\2\9\q\6\i\j\l\g\h\w\n\3\a\r\b\y\y\n\x\f\b\t\h\v\s\e\o\b\1\j\d\q\5\p\y\k\x\e\z\0\q\1\z\8\6\o\9\a\5\w\w\u\k\p\l\g\6\n\r\2\b\r\o\4\g\t\m\j\0\a\6\h\s\q\t\r\a\p\a\1\8\3\u\v\s\c\p\x\j\x\h\4\5\x\m\b\3\h\u\p\k\1\v\f\r\w\r\f\g\0\k\y\d\m\v\i\6\g\y\1\z\e\j\c\q\y\p\a\s\s\d\h\i\e\1\9\t\5\x\c\0\4\j\k\x\7\0\8\n\6\n\v\5\8\c\n\e\9\j\a\v\y\z\u\m\4\9\a\r\m\d\y\z\4\6\a\b\d\o\l\3\b\h\2\e\u\5\g\v\j\3\t\z\r\s\5\b\j\3\g\4\a\4\y\o\i\u\2\n\x\i\e\d\c\u\i\g\c\m\t\g\i\1\r\r\s\m\l\7\t\t\e\n\h\a\o\3\h\l\u\j\0\8\3\s\c\a\x\2\x\f\l\7\k\2\q\4\b\6\s\f\n\4\c\g\p\u\w\6\a\7\i\r\z\0\8\r\p\t\u\m\5\n\l\3\5\d\6\x\r\a\m\p\c\o\j\z\7\q\6\r\b\j\f\t\y\8\5\4\o\3\r\h\5\k\j\h\1\q\f\s\v\v\k\s\i\d\o\8\k\7\w\9\5\i\a\f\9\q\a\l\v\1\4\v\s\1\e\e\9\v\i\f\o\u\7\a\1\0\h ]] 00:07:13.866 00:07:13.866 real 0m1.830s 00:07:13.866 user 0m1.056s 00:07:13.866 sys 0m0.443s 00:07:13.866 ************************************ 00:07:13.866 END TEST dd_flag_nofollow_forced_aio 00:07:13.866 ************************************ 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 
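The nofollow cases above all follow the same pattern: link one of the dump files through a symlink, then confirm that spdk_dd refuses to traverse it when --iflag=nofollow (or --oflag=nofollow) is set and reports "Too many levels of symbolic links". A minimal standalone sketch of that check follows; the spdk_dd flags and the dd.dump0/dd.dump0.link paths are taken from the log, while the SPDK_DIR default and the use of /dev/urandom to seed the input file are assumptions for illustration, not the actual test script.

#!/usr/bin/env bash
# Sketch of the nofollow check exercised above (illustrative only).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
DD_BIN="$SPDK_DIR/build/bin/spdk_dd"
TEST_DIR="$SPDK_DIR/test/dd"

# Create a small regular file and a symlink pointing at it.
dd if=/dev/urandom of="$TEST_DIR/dd.dump0" bs=512 count=1 status=none
ln -fs "$TEST_DIR/dd.dump0" "$TEST_DIR/dd.dump0.link"

# With --iflag=nofollow the copy must refuse to follow the symlink.
if "$DD_BIN" --aio --if="$TEST_DIR/dd.dump0.link" --iflag=nofollow \
             --of="$TEST_DIR/dd.dump1"; then
    echo "unexpected: nofollow copy succeeded" >&2
    exit 1
fi
echo "nofollow rejected the symlink as expected"

The same structure is used for the output direction, with the symlink on the --of side and --oflag=nofollow instead.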
00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:13.866 ************************************ 00:07:13.866 START TEST dd_flag_noatime_forced_aio 00:07:13.866 ************************************ 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721083048 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.866 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721083049 00:07:13.867 22:37:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:14.802 22:37:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.802 [2024-07-15 22:37:30.301338] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:14.802 [2024-07-15 22:37:30.301454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63413 ] 00:07:15.061 [2024-07-15 22:37:30.437754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.061 [2024-07-15 22:37:30.552717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.061 [2024-07-15 22:37:30.607377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.583  Copying: 512/512 [B] (average 500 kBps) 00:07:15.583 00:07:15.583 22:37:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.583 22:37:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721083048 )) 00:07:15.583 22:37:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.583 22:37:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721083049 )) 00:07:15.583 22:37:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.583 [2024-07-15 22:37:30.961053] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:15.583 [2024-07-15 22:37:30.961197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63419 ] 00:07:15.583 [2024-07-15 22:37:31.100816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.850 [2024-07-15 22:37:31.202668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.850 [2024-07-15 22:37:31.256821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.108  Copying: 512/512 [B] (average 500 kBps) 00:07:16.108 00:07:16.108 22:37:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:16.108 22:37:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721083051 )) 00:07:16.109 00:07:16.109 real 0m2.315s 00:07:16.109 user 0m0.744s 00:07:16.109 sys 0m0.329s 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:16.109 ************************************ 00:07:16.109 END TEST dd_flag_noatime_forced_aio 00:07:16.109 ************************************ 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set 
+x 00:07:16.109 ************************************ 00:07:16.109 START TEST dd_flags_misc_forced_aio 00:07:16.109 ************************************ 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.109 22:37:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:16.109 [2024-07-15 22:37:31.655006] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:16.109 [2024-07-15 22:37:31.655121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63451 ] 00:07:16.368 [2024-07-15 22:37:31.790656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.368 [2024-07-15 22:37:31.903954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.627 [2024-07-15 22:37:31.960233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.885  Copying: 512/512 [B] (average 500 kBps) 00:07:16.885 00:07:16.885 22:37:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9s8glgay18zupj505obkekiwzevoexzs5logdre2iyr77tewc989kng0gkw8l2aqrgmmgavj26w6m5o9mhzf0zcf1iiv21sb4atvkh52xmdcsnvv5rh9z82205mzxpuiof0qpgxx2hk4idzrp5sjherqetbgvq0mcho75kvegyekcdy4hdbac2oa8xb08bxwogu7kbnrsam9u91lglk0f9rztqqgjub5rid0y5tsdkxdw9vp7ab8bq5hz23licgztqm6g7xcza6924ap8ypbalio683q9uqclacw02cjogbs355hon4gldifdy3wy7lyip8qce9aunnl2ndxl04d5ytxbjr4899xjqu02nzqlv94z6ead7o9bk7puk406mvicd62sq1f8fq08ki84mc9a6fnrqgn9cnrzdnw1envb3cxjhcpb5h6i8n0vl1ks9xj1v388zpdnswkg700gfmoamznyozjzonhk297cdxc0o3sftllba2llylg4itlvmzq == 
\9\s\8\g\l\g\a\y\1\8\z\u\p\j\5\0\5\o\b\k\e\k\i\w\z\e\v\o\e\x\z\s\5\l\o\g\d\r\e\2\i\y\r\7\7\t\e\w\c\9\8\9\k\n\g\0\g\k\w\8\l\2\a\q\r\g\m\m\g\a\v\j\2\6\w\6\m\5\o\9\m\h\z\f\0\z\c\f\1\i\i\v\2\1\s\b\4\a\t\v\k\h\5\2\x\m\d\c\s\n\v\v\5\r\h\9\z\8\2\2\0\5\m\z\x\p\u\i\o\f\0\q\p\g\x\x\2\h\k\4\i\d\z\r\p\5\s\j\h\e\r\q\e\t\b\g\v\q\0\m\c\h\o\7\5\k\v\e\g\y\e\k\c\d\y\4\h\d\b\a\c\2\o\a\8\x\b\0\8\b\x\w\o\g\u\7\k\b\n\r\s\a\m\9\u\9\1\l\g\l\k\0\f\9\r\z\t\q\q\g\j\u\b\5\r\i\d\0\y\5\t\s\d\k\x\d\w\9\v\p\7\a\b\8\b\q\5\h\z\2\3\l\i\c\g\z\t\q\m\6\g\7\x\c\z\a\6\9\2\4\a\p\8\y\p\b\a\l\i\o\6\8\3\q\9\u\q\c\l\a\c\w\0\2\c\j\o\g\b\s\3\5\5\h\o\n\4\g\l\d\i\f\d\y\3\w\y\7\l\y\i\p\8\q\c\e\9\a\u\n\n\l\2\n\d\x\l\0\4\d\5\y\t\x\b\j\r\4\8\9\9\x\j\q\u\0\2\n\z\q\l\v\9\4\z\6\e\a\d\7\o\9\b\k\7\p\u\k\4\0\6\m\v\i\c\d\6\2\s\q\1\f\8\f\q\0\8\k\i\8\4\m\c\9\a\6\f\n\r\q\g\n\9\c\n\r\z\d\n\w\1\e\n\v\b\3\c\x\j\h\c\p\b\5\h\6\i\8\n\0\v\l\1\k\s\9\x\j\1\v\3\8\8\z\p\d\n\s\w\k\g\7\0\0\g\f\m\o\a\m\z\n\y\o\z\j\z\o\n\h\k\2\9\7\c\d\x\c\0\o\3\s\f\t\l\l\b\a\2\l\l\y\l\g\4\i\t\l\v\m\z\q ]] 00:07:16.885 22:37:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.885 22:37:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:16.885 [2024-07-15 22:37:32.288240] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:16.885 [2024-07-15 22:37:32.288354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63458 ] 00:07:16.885 [2024-07-15 22:37:32.425512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.144 [2024-07-15 22:37:32.534607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.144 [2024-07-15 22:37:32.591339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.402  Copying: 512/512 [B] (average 500 kBps) 00:07:17.402 00:07:17.402 22:37:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9s8glgay18zupj505obkekiwzevoexzs5logdre2iyr77tewc989kng0gkw8l2aqrgmmgavj26w6m5o9mhzf0zcf1iiv21sb4atvkh52xmdcsnvv5rh9z82205mzxpuiof0qpgxx2hk4idzrp5sjherqetbgvq0mcho75kvegyekcdy4hdbac2oa8xb08bxwogu7kbnrsam9u91lglk0f9rztqqgjub5rid0y5tsdkxdw9vp7ab8bq5hz23licgztqm6g7xcza6924ap8ypbalio683q9uqclacw02cjogbs355hon4gldifdy3wy7lyip8qce9aunnl2ndxl04d5ytxbjr4899xjqu02nzqlv94z6ead7o9bk7puk406mvicd62sq1f8fq08ki84mc9a6fnrqgn9cnrzdnw1envb3cxjhcpb5h6i8n0vl1ks9xj1v388zpdnswkg700gfmoamznyozjzonhk297cdxc0o3sftllba2llylg4itlvmzq == 
\9\s\8\g\l\g\a\y\1\8\z\u\p\j\5\0\5\o\b\k\e\k\i\w\z\e\v\o\e\x\z\s\5\l\o\g\d\r\e\2\i\y\r\7\7\t\e\w\c\9\8\9\k\n\g\0\g\k\w\8\l\2\a\q\r\g\m\m\g\a\v\j\2\6\w\6\m\5\o\9\m\h\z\f\0\z\c\f\1\i\i\v\2\1\s\b\4\a\t\v\k\h\5\2\x\m\d\c\s\n\v\v\5\r\h\9\z\8\2\2\0\5\m\z\x\p\u\i\o\f\0\q\p\g\x\x\2\h\k\4\i\d\z\r\p\5\s\j\h\e\r\q\e\t\b\g\v\q\0\m\c\h\o\7\5\k\v\e\g\y\e\k\c\d\y\4\h\d\b\a\c\2\o\a\8\x\b\0\8\b\x\w\o\g\u\7\k\b\n\r\s\a\m\9\u\9\1\l\g\l\k\0\f\9\r\z\t\q\q\g\j\u\b\5\r\i\d\0\y\5\t\s\d\k\x\d\w\9\v\p\7\a\b\8\b\q\5\h\z\2\3\l\i\c\g\z\t\q\m\6\g\7\x\c\z\a\6\9\2\4\a\p\8\y\p\b\a\l\i\o\6\8\3\q\9\u\q\c\l\a\c\w\0\2\c\j\o\g\b\s\3\5\5\h\o\n\4\g\l\d\i\f\d\y\3\w\y\7\l\y\i\p\8\q\c\e\9\a\u\n\n\l\2\n\d\x\l\0\4\d\5\y\t\x\b\j\r\4\8\9\9\x\j\q\u\0\2\n\z\q\l\v\9\4\z\6\e\a\d\7\o\9\b\k\7\p\u\k\4\0\6\m\v\i\c\d\6\2\s\q\1\f\8\f\q\0\8\k\i\8\4\m\c\9\a\6\f\n\r\q\g\n\9\c\n\r\z\d\n\w\1\e\n\v\b\3\c\x\j\h\c\p\b\5\h\6\i\8\n\0\v\l\1\k\s\9\x\j\1\v\3\8\8\z\p\d\n\s\w\k\g\7\0\0\g\f\m\o\a\m\z\n\y\o\z\j\z\o\n\h\k\2\9\7\c\d\x\c\0\o\3\s\f\t\l\l\b\a\2\l\l\y\l\g\4\i\t\l\v\m\z\q ]] 00:07:17.402 22:37:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:17.402 22:37:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:17.402 [2024-07-15 22:37:32.917250] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:17.402 [2024-07-15 22:37:32.917351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63466 ] 00:07:17.661 [2024-07-15 22:37:33.054714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.661 [2024-07-15 22:37:33.154122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.661 [2024-07-15 22:37:33.209795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.926  Copying: 512/512 [B] (average 166 kBps) 00:07:17.927 00:07:17.927 22:37:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9s8glgay18zupj505obkekiwzevoexzs5logdre2iyr77tewc989kng0gkw8l2aqrgmmgavj26w6m5o9mhzf0zcf1iiv21sb4atvkh52xmdcsnvv5rh9z82205mzxpuiof0qpgxx2hk4idzrp5sjherqetbgvq0mcho75kvegyekcdy4hdbac2oa8xb08bxwogu7kbnrsam9u91lglk0f9rztqqgjub5rid0y5tsdkxdw9vp7ab8bq5hz23licgztqm6g7xcza6924ap8ypbalio683q9uqclacw02cjogbs355hon4gldifdy3wy7lyip8qce9aunnl2ndxl04d5ytxbjr4899xjqu02nzqlv94z6ead7o9bk7puk406mvicd62sq1f8fq08ki84mc9a6fnrqgn9cnrzdnw1envb3cxjhcpb5h6i8n0vl1ks9xj1v388zpdnswkg700gfmoamznyozjzonhk297cdxc0o3sftllba2llylg4itlvmzq == 
\9\s\8\g\l\g\a\y\1\8\z\u\p\j\5\0\5\o\b\k\e\k\i\w\z\e\v\o\e\x\z\s\5\l\o\g\d\r\e\2\i\y\r\7\7\t\e\w\c\9\8\9\k\n\g\0\g\k\w\8\l\2\a\q\r\g\m\m\g\a\v\j\2\6\w\6\m\5\o\9\m\h\z\f\0\z\c\f\1\i\i\v\2\1\s\b\4\a\t\v\k\h\5\2\x\m\d\c\s\n\v\v\5\r\h\9\z\8\2\2\0\5\m\z\x\p\u\i\o\f\0\q\p\g\x\x\2\h\k\4\i\d\z\r\p\5\s\j\h\e\r\q\e\t\b\g\v\q\0\m\c\h\o\7\5\k\v\e\g\y\e\k\c\d\y\4\h\d\b\a\c\2\o\a\8\x\b\0\8\b\x\w\o\g\u\7\k\b\n\r\s\a\m\9\u\9\1\l\g\l\k\0\f\9\r\z\t\q\q\g\j\u\b\5\r\i\d\0\y\5\t\s\d\k\x\d\w\9\v\p\7\a\b\8\b\q\5\h\z\2\3\l\i\c\g\z\t\q\m\6\g\7\x\c\z\a\6\9\2\4\a\p\8\y\p\b\a\l\i\o\6\8\3\q\9\u\q\c\l\a\c\w\0\2\c\j\o\g\b\s\3\5\5\h\o\n\4\g\l\d\i\f\d\y\3\w\y\7\l\y\i\p\8\q\c\e\9\a\u\n\n\l\2\n\d\x\l\0\4\d\5\y\t\x\b\j\r\4\8\9\9\x\j\q\u\0\2\n\z\q\l\v\9\4\z\6\e\a\d\7\o\9\b\k\7\p\u\k\4\0\6\m\v\i\c\d\6\2\s\q\1\f\8\f\q\0\8\k\i\8\4\m\c\9\a\6\f\n\r\q\g\n\9\c\n\r\z\d\n\w\1\e\n\v\b\3\c\x\j\h\c\p\b\5\h\6\i\8\n\0\v\l\1\k\s\9\x\j\1\v\3\8\8\z\p\d\n\s\w\k\g\7\0\0\g\f\m\o\a\m\z\n\y\o\z\j\z\o\n\h\k\2\9\7\c\d\x\c\0\o\3\s\f\t\l\l\b\a\2\l\l\y\l\g\4\i\t\l\v\m\z\q ]] 00:07:17.927 22:37:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:17.927 22:37:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:18.188 [2024-07-15 22:37:33.525289] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:18.188 [2024-07-15 22:37:33.525407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63479 ] 00:07:18.188 [2024-07-15 22:37:33.662407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.445 [2024-07-15 22:37:33.768095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.445 [2024-07-15 22:37:33.822600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.704  Copying: 512/512 [B] (average 500 kBps) 00:07:18.704 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9s8glgay18zupj505obkekiwzevoexzs5logdre2iyr77tewc989kng0gkw8l2aqrgmmgavj26w6m5o9mhzf0zcf1iiv21sb4atvkh52xmdcsnvv5rh9z82205mzxpuiof0qpgxx2hk4idzrp5sjherqetbgvq0mcho75kvegyekcdy4hdbac2oa8xb08bxwogu7kbnrsam9u91lglk0f9rztqqgjub5rid0y5tsdkxdw9vp7ab8bq5hz23licgztqm6g7xcza6924ap8ypbalio683q9uqclacw02cjogbs355hon4gldifdy3wy7lyip8qce9aunnl2ndxl04d5ytxbjr4899xjqu02nzqlv94z6ead7o9bk7puk406mvicd62sq1f8fq08ki84mc9a6fnrqgn9cnrzdnw1envb3cxjhcpb5h6i8n0vl1ks9xj1v388zpdnswkg700gfmoamznyozjzonhk297cdxc0o3sftllba2llylg4itlvmzq == 
\9\s\8\g\l\g\a\y\1\8\z\u\p\j\5\0\5\o\b\k\e\k\i\w\z\e\v\o\e\x\z\s\5\l\o\g\d\r\e\2\i\y\r\7\7\t\e\w\c\9\8\9\k\n\g\0\g\k\w\8\l\2\a\q\r\g\m\m\g\a\v\j\2\6\w\6\m\5\o\9\m\h\z\f\0\z\c\f\1\i\i\v\2\1\s\b\4\a\t\v\k\h\5\2\x\m\d\c\s\n\v\v\5\r\h\9\z\8\2\2\0\5\m\z\x\p\u\i\o\f\0\q\p\g\x\x\2\h\k\4\i\d\z\r\p\5\s\j\h\e\r\q\e\t\b\g\v\q\0\m\c\h\o\7\5\k\v\e\g\y\e\k\c\d\y\4\h\d\b\a\c\2\o\a\8\x\b\0\8\b\x\w\o\g\u\7\k\b\n\r\s\a\m\9\u\9\1\l\g\l\k\0\f\9\r\z\t\q\q\g\j\u\b\5\r\i\d\0\y\5\t\s\d\k\x\d\w\9\v\p\7\a\b\8\b\q\5\h\z\2\3\l\i\c\g\z\t\q\m\6\g\7\x\c\z\a\6\9\2\4\a\p\8\y\p\b\a\l\i\o\6\8\3\q\9\u\q\c\l\a\c\w\0\2\c\j\o\g\b\s\3\5\5\h\o\n\4\g\l\d\i\f\d\y\3\w\y\7\l\y\i\p\8\q\c\e\9\a\u\n\n\l\2\n\d\x\l\0\4\d\5\y\t\x\b\j\r\4\8\9\9\x\j\q\u\0\2\n\z\q\l\v\9\4\z\6\e\a\d\7\o\9\b\k\7\p\u\k\4\0\6\m\v\i\c\d\6\2\s\q\1\f\8\f\q\0\8\k\i\8\4\m\c\9\a\6\f\n\r\q\g\n\9\c\n\r\z\d\n\w\1\e\n\v\b\3\c\x\j\h\c\p\b\5\h\6\i\8\n\0\v\l\1\k\s\9\x\j\1\v\3\8\8\z\p\d\n\s\w\k\g\7\0\0\g\f\m\o\a\m\z\n\y\o\z\j\z\o\n\h\k\2\9\7\c\d\x\c\0\o\3\s\f\t\l\l\b\a\2\l\l\y\l\g\4\i\t\l\v\m\z\q ]] 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:18.704 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:18.704 [2024-07-15 22:37:34.164515] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:18.704 [2024-07-15 22:37:34.164646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63481 ] 00:07:18.962 [2024-07-15 22:37:34.301553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.962 [2024-07-15 22:37:34.410176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.962 [2024-07-15 22:37:34.463336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.220  Copying: 512/512 [B] (average 500 kBps) 00:07:19.220 00:07:19.220 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hx8hquicjl1gnssekhvayeg8pyzouauu5oaou9tm8yhnzo3l8fo5o2plrqe87hprmvpevq7qdt3bf0emnarayt8nt4g33gb6g6e3yej1s40b9bmknx1nl0d23qthbfbvooxqu4c3bo6i14mr62ylzq000mr3fh169jl717zr908gmgfbvuj4zycc5icaxsj7is071v5ldnucqmhvvpn55vfwu91ddsp1zoqtldpugpvls5ypxbn4i1kguodn15neno696xgbmfxqowo3p12kpirgu7a7xqwbhj0edf1zylh8mjin3wce73kdvyu01n66g8gilycg9mxwhomxsvg1vrm4efqh5bg8frbf7omxzx20qrsoreqbygxru3xbt77jep4ib78obdh7s5dit84xf3kgumu1tm2a3lxidn3h4gddjth87v0rfzt5l5gf5sz2cu6r37dk664qkir2m8bg9wjss0ia6q8xu10u1jle2i2o2lxmctykhy87uu4dshse == \h\x\8\h\q\u\i\c\j\l\1\g\n\s\s\e\k\h\v\a\y\e\g\8\p\y\z\o\u\a\u\u\5\o\a\o\u\9\t\m\8\y\h\n\z\o\3\l\8\f\o\5\o\2\p\l\r\q\e\8\7\h\p\r\m\v\p\e\v\q\7\q\d\t\3\b\f\0\e\m\n\a\r\a\y\t\8\n\t\4\g\3\3\g\b\6\g\6\e\3\y\e\j\1\s\4\0\b\9\b\m\k\n\x\1\n\l\0\d\2\3\q\t\h\b\f\b\v\o\o\x\q\u\4\c\3\b\o\6\i\1\4\m\r\6\2\y\l\z\q\0\0\0\m\r\3\f\h\1\6\9\j\l\7\1\7\z\r\9\0\8\g\m\g\f\b\v\u\j\4\z\y\c\c\5\i\c\a\x\s\j\7\i\s\0\7\1\v\5\l\d\n\u\c\q\m\h\v\v\p\n\5\5\v\f\w\u\9\1\d\d\s\p\1\z\o\q\t\l\d\p\u\g\p\v\l\s\5\y\p\x\b\n\4\i\1\k\g\u\o\d\n\1\5\n\e\n\o\6\9\6\x\g\b\m\f\x\q\o\w\o\3\p\1\2\k\p\i\r\g\u\7\a\7\x\q\w\b\h\j\0\e\d\f\1\z\y\l\h\8\m\j\i\n\3\w\c\e\7\3\k\d\v\y\u\0\1\n\6\6\g\8\g\i\l\y\c\g\9\m\x\w\h\o\m\x\s\v\g\1\v\r\m\4\e\f\q\h\5\b\g\8\f\r\b\f\7\o\m\x\z\x\2\0\q\r\s\o\r\e\q\b\y\g\x\r\u\3\x\b\t\7\7\j\e\p\4\i\b\7\8\o\b\d\h\7\s\5\d\i\t\8\4\x\f\3\k\g\u\m\u\1\t\m\2\a\3\l\x\i\d\n\3\h\4\g\d\d\j\t\h\8\7\v\0\r\f\z\t\5\l\5\g\f\5\s\z\2\c\u\6\r\3\7\d\k\6\6\4\q\k\i\r\2\m\8\b\g\9\w\j\s\s\0\i\a\6\q\8\x\u\1\0\u\1\j\l\e\2\i\2\o\2\l\x\m\c\t\y\k\h\y\8\7\u\u\4\d\s\h\s\e ]] 00:07:19.220 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.220 22:37:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:19.478 [2024-07-15 22:37:34.804142] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:19.478 [2024-07-15 22:37:34.804271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63494 ] 00:07:19.478 [2024-07-15 22:37:34.941524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.738 [2024-07-15 22:37:35.053692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.738 [2024-07-15 22:37:35.107458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.997  Copying: 512/512 [B] (average 500 kBps) 00:07:19.997 00:07:19.997 22:37:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hx8hquicjl1gnssekhvayeg8pyzouauu5oaou9tm8yhnzo3l8fo5o2plrqe87hprmvpevq7qdt3bf0emnarayt8nt4g33gb6g6e3yej1s40b9bmknx1nl0d23qthbfbvooxqu4c3bo6i14mr62ylzq000mr3fh169jl717zr908gmgfbvuj4zycc5icaxsj7is071v5ldnucqmhvvpn55vfwu91ddsp1zoqtldpugpvls5ypxbn4i1kguodn15neno696xgbmfxqowo3p12kpirgu7a7xqwbhj0edf1zylh8mjin3wce73kdvyu01n66g8gilycg9mxwhomxsvg1vrm4efqh5bg8frbf7omxzx20qrsoreqbygxru3xbt77jep4ib78obdh7s5dit84xf3kgumu1tm2a3lxidn3h4gddjth87v0rfzt5l5gf5sz2cu6r37dk664qkir2m8bg9wjss0ia6q8xu10u1jle2i2o2lxmctykhy87uu4dshse == \h\x\8\h\q\u\i\c\j\l\1\g\n\s\s\e\k\h\v\a\y\e\g\8\p\y\z\o\u\a\u\u\5\o\a\o\u\9\t\m\8\y\h\n\z\o\3\l\8\f\o\5\o\2\p\l\r\q\e\8\7\h\p\r\m\v\p\e\v\q\7\q\d\t\3\b\f\0\e\m\n\a\r\a\y\t\8\n\t\4\g\3\3\g\b\6\g\6\e\3\y\e\j\1\s\4\0\b\9\b\m\k\n\x\1\n\l\0\d\2\3\q\t\h\b\f\b\v\o\o\x\q\u\4\c\3\b\o\6\i\1\4\m\r\6\2\y\l\z\q\0\0\0\m\r\3\f\h\1\6\9\j\l\7\1\7\z\r\9\0\8\g\m\g\f\b\v\u\j\4\z\y\c\c\5\i\c\a\x\s\j\7\i\s\0\7\1\v\5\l\d\n\u\c\q\m\h\v\v\p\n\5\5\v\f\w\u\9\1\d\d\s\p\1\z\o\q\t\l\d\p\u\g\p\v\l\s\5\y\p\x\b\n\4\i\1\k\g\u\o\d\n\1\5\n\e\n\o\6\9\6\x\g\b\m\f\x\q\o\w\o\3\p\1\2\k\p\i\r\g\u\7\a\7\x\q\w\b\h\j\0\e\d\f\1\z\y\l\h\8\m\j\i\n\3\w\c\e\7\3\k\d\v\y\u\0\1\n\6\6\g\8\g\i\l\y\c\g\9\m\x\w\h\o\m\x\s\v\g\1\v\r\m\4\e\f\q\h\5\b\g\8\f\r\b\f\7\o\m\x\z\x\2\0\q\r\s\o\r\e\q\b\y\g\x\r\u\3\x\b\t\7\7\j\e\p\4\i\b\7\8\o\b\d\h\7\s\5\d\i\t\8\4\x\f\3\k\g\u\m\u\1\t\m\2\a\3\l\x\i\d\n\3\h\4\g\d\d\j\t\h\8\7\v\0\r\f\z\t\5\l\5\g\f\5\s\z\2\c\u\6\r\3\7\d\k\6\6\4\q\k\i\r\2\m\8\b\g\9\w\j\s\s\0\i\a\6\q\8\x\u\1\0\u\1\j\l\e\2\i\2\o\2\l\x\m\c\t\y\k\h\y\8\7\u\u\4\d\s\h\s\e ]] 00:07:19.997 22:37:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.997 22:37:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:19.997 [2024-07-15 22:37:35.428871] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:19.997 [2024-07-15 22:37:35.428983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63507 ] 00:07:19.997 [2024-07-15 22:37:35.558771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.255 [2024-07-15 22:37:35.668687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.255 [2024-07-15 22:37:35.722268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.513  Copying: 512/512 [B] (average 500 kBps) 00:07:20.513 00:07:20.513 22:37:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hx8hquicjl1gnssekhvayeg8pyzouauu5oaou9tm8yhnzo3l8fo5o2plrqe87hprmvpevq7qdt3bf0emnarayt8nt4g33gb6g6e3yej1s40b9bmknx1nl0d23qthbfbvooxqu4c3bo6i14mr62ylzq000mr3fh169jl717zr908gmgfbvuj4zycc5icaxsj7is071v5ldnucqmhvvpn55vfwu91ddsp1zoqtldpugpvls5ypxbn4i1kguodn15neno696xgbmfxqowo3p12kpirgu7a7xqwbhj0edf1zylh8mjin3wce73kdvyu01n66g8gilycg9mxwhomxsvg1vrm4efqh5bg8frbf7omxzx20qrsoreqbygxru3xbt77jep4ib78obdh7s5dit84xf3kgumu1tm2a3lxidn3h4gddjth87v0rfzt5l5gf5sz2cu6r37dk664qkir2m8bg9wjss0ia6q8xu10u1jle2i2o2lxmctykhy87uu4dshse == \h\x\8\h\q\u\i\c\j\l\1\g\n\s\s\e\k\h\v\a\y\e\g\8\p\y\z\o\u\a\u\u\5\o\a\o\u\9\t\m\8\y\h\n\z\o\3\l\8\f\o\5\o\2\p\l\r\q\e\8\7\h\p\r\m\v\p\e\v\q\7\q\d\t\3\b\f\0\e\m\n\a\r\a\y\t\8\n\t\4\g\3\3\g\b\6\g\6\e\3\y\e\j\1\s\4\0\b\9\b\m\k\n\x\1\n\l\0\d\2\3\q\t\h\b\f\b\v\o\o\x\q\u\4\c\3\b\o\6\i\1\4\m\r\6\2\y\l\z\q\0\0\0\m\r\3\f\h\1\6\9\j\l\7\1\7\z\r\9\0\8\g\m\g\f\b\v\u\j\4\z\y\c\c\5\i\c\a\x\s\j\7\i\s\0\7\1\v\5\l\d\n\u\c\q\m\h\v\v\p\n\5\5\v\f\w\u\9\1\d\d\s\p\1\z\o\q\t\l\d\p\u\g\p\v\l\s\5\y\p\x\b\n\4\i\1\k\g\u\o\d\n\1\5\n\e\n\o\6\9\6\x\g\b\m\f\x\q\o\w\o\3\p\1\2\k\p\i\r\g\u\7\a\7\x\q\w\b\h\j\0\e\d\f\1\z\y\l\h\8\m\j\i\n\3\w\c\e\7\3\k\d\v\y\u\0\1\n\6\6\g\8\g\i\l\y\c\g\9\m\x\w\h\o\m\x\s\v\g\1\v\r\m\4\e\f\q\h\5\b\g\8\f\r\b\f\7\o\m\x\z\x\2\0\q\r\s\o\r\e\q\b\y\g\x\r\u\3\x\b\t\7\7\j\e\p\4\i\b\7\8\o\b\d\h\7\s\5\d\i\t\8\4\x\f\3\k\g\u\m\u\1\t\m\2\a\3\l\x\i\d\n\3\h\4\g\d\d\j\t\h\8\7\v\0\r\f\z\t\5\l\5\g\f\5\s\z\2\c\u\6\r\3\7\d\k\6\6\4\q\k\i\r\2\m\8\b\g\9\w\j\s\s\0\i\a\6\q\8\x\u\1\0\u\1\j\l\e\2\i\2\o\2\l\x\m\c\t\y\k\h\y\8\7\u\u\4\d\s\h\s\e ]] 00:07:20.513 22:37:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.513 22:37:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:20.513 [2024-07-15 22:37:36.069091] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:20.513 [2024-07-15 22:37:36.069206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63509 ] 00:07:20.772 [2024-07-15 22:37:36.208385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.772 [2024-07-15 22:37:36.308762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.031 [2024-07-15 22:37:36.362506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.289  Copying: 512/512 [B] (average 500 kBps) 00:07:21.289 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hx8hquicjl1gnssekhvayeg8pyzouauu5oaou9tm8yhnzo3l8fo5o2plrqe87hprmvpevq7qdt3bf0emnarayt8nt4g33gb6g6e3yej1s40b9bmknx1nl0d23qthbfbvooxqu4c3bo6i14mr62ylzq000mr3fh169jl717zr908gmgfbvuj4zycc5icaxsj7is071v5ldnucqmhvvpn55vfwu91ddsp1zoqtldpugpvls5ypxbn4i1kguodn15neno696xgbmfxqowo3p12kpirgu7a7xqwbhj0edf1zylh8mjin3wce73kdvyu01n66g8gilycg9mxwhomxsvg1vrm4efqh5bg8frbf7omxzx20qrsoreqbygxru3xbt77jep4ib78obdh7s5dit84xf3kgumu1tm2a3lxidn3h4gddjth87v0rfzt5l5gf5sz2cu6r37dk664qkir2m8bg9wjss0ia6q8xu10u1jle2i2o2lxmctykhy87uu4dshse == \h\x\8\h\q\u\i\c\j\l\1\g\n\s\s\e\k\h\v\a\y\e\g\8\p\y\z\o\u\a\u\u\5\o\a\o\u\9\t\m\8\y\h\n\z\o\3\l\8\f\o\5\o\2\p\l\r\q\e\8\7\h\p\r\m\v\p\e\v\q\7\q\d\t\3\b\f\0\e\m\n\a\r\a\y\t\8\n\t\4\g\3\3\g\b\6\g\6\e\3\y\e\j\1\s\4\0\b\9\b\m\k\n\x\1\n\l\0\d\2\3\q\t\h\b\f\b\v\o\o\x\q\u\4\c\3\b\o\6\i\1\4\m\r\6\2\y\l\z\q\0\0\0\m\r\3\f\h\1\6\9\j\l\7\1\7\z\r\9\0\8\g\m\g\f\b\v\u\j\4\z\y\c\c\5\i\c\a\x\s\j\7\i\s\0\7\1\v\5\l\d\n\u\c\q\m\h\v\v\p\n\5\5\v\f\w\u\9\1\d\d\s\p\1\z\o\q\t\l\d\p\u\g\p\v\l\s\5\y\p\x\b\n\4\i\1\k\g\u\o\d\n\1\5\n\e\n\o\6\9\6\x\g\b\m\f\x\q\o\w\o\3\p\1\2\k\p\i\r\g\u\7\a\7\x\q\w\b\h\j\0\e\d\f\1\z\y\l\h\8\m\j\i\n\3\w\c\e\7\3\k\d\v\y\u\0\1\n\6\6\g\8\g\i\l\y\c\g\9\m\x\w\h\o\m\x\s\v\g\1\v\r\m\4\e\f\q\h\5\b\g\8\f\r\b\f\7\o\m\x\z\x\2\0\q\r\s\o\r\e\q\b\y\g\x\r\u\3\x\b\t\7\7\j\e\p\4\i\b\7\8\o\b\d\h\7\s\5\d\i\t\8\4\x\f\3\k\g\u\m\u\1\t\m\2\a\3\l\x\i\d\n\3\h\4\g\d\d\j\t\h\8\7\v\0\r\f\z\t\5\l\5\g\f\5\s\z\2\c\u\6\r\3\7\d\k\6\6\4\q\k\i\r\2\m\8\b\g\9\w\j\s\s\0\i\a\6\q\8\x\u\1\0\u\1\j\l\e\2\i\2\o\2\l\x\m\c\t\y\k\h\y\8\7\u\u\4\d\s\h\s\e ]] 00:07:21.289 00:07:21.289 real 0m5.048s 00:07:21.289 user 0m2.882s 00:07:21.289 sys 0m1.186s 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:21.289 ************************************ 00:07:21.289 END TEST dd_flags_misc_forced_aio 00:07:21.289 ************************************ 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:21.289 00:07:21.289 real 0m22.569s 00:07:21.289 user 0m11.694s 00:07:21.289 sys 0m6.788s 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.289 22:37:36 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.289 ************************************ 00:07:21.289 END TEST spdk_dd_posix 00:07:21.289 ************************************ 00:07:21.289 22:37:36 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:21.289 22:37:36 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:21.289 22:37:36 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.289 22:37:36 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.289 22:37:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:21.289 ************************************ 00:07:21.289 START TEST spdk_dd_malloc 00:07:21.289 ************************************ 00:07:21.289 22:37:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:21.289 * Looking for test storage... 00:07:21.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:21.290 ************************************ 00:07:21.290 START TEST dd_malloc_copy 00:07:21.290 ************************************ 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:21.290 22:37:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:21.547 [2024-07-15 22:37:36.903436] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:21.548 [2024-07-15 22:37:36.903539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63583 ] 00:07:21.548 { 00:07:21.548 "subsystems": [ 00:07:21.548 { 00:07:21.548 "subsystem": "bdev", 00:07:21.548 "config": [ 00:07:21.548 { 00:07:21.548 "params": { 00:07:21.548 "block_size": 512, 00:07:21.548 "num_blocks": 1048576, 00:07:21.548 "name": "malloc0" 00:07:21.548 }, 00:07:21.548 "method": "bdev_malloc_create" 00:07:21.548 }, 00:07:21.548 { 00:07:21.548 "params": { 00:07:21.548 "block_size": 512, 00:07:21.548 "num_blocks": 1048576, 00:07:21.548 "name": "malloc1" 00:07:21.548 }, 00:07:21.548 "method": "bdev_malloc_create" 00:07:21.548 }, 00:07:21.548 { 00:07:21.548 "method": "bdev_wait_for_examine" 00:07:21.548 } 00:07:21.548 ] 00:07:21.548 } 00:07:21.548 ] 00:07:21.548 } 00:07:21.548 [2024-07-15 22:37:37.039435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.806 [2024-07-15 22:37:37.140771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.806 [2024-07-15 22:37:37.195933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.249  Copying: 207/512 [MB] (207 MBps) Copying: 410/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 205 MBps) 00:07:25.249 00:07:25.249 22:37:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:25.249 22:37:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:25.249 22:37:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:25.249 22:37:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.249 [2024-07-15 22:37:40.694598] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
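For reference, the malloc-to-malloc copy above is driven entirely by the JSON shown in the log: two malloc bdevs of 1048576 blocks x 512 B (512 MiB each) plus a bdev_wait_for_examine step, passed to spdk_dd over /dev/fd/62 together with --ib=malloc0 --ob=malloc1. A hedged sketch of an equivalent standalone invocation follows; the /tmp/malloc.json path and the SPDK_DIR default are assumptions, while the JSON body and the spdk_dd options mirror the logged configuration (which reported roughly 205 MBps for this direction).

#!/usr/bin/env bash
# Sketch of the malloc0 -> malloc1 copy shown above (illustrative only).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# Two 512 MiB malloc bdevs, as in the gen_conf output captured in the log.
cat > /tmp/malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Copy bdev malloc0 into bdev malloc1.
"$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=malloc1 --json /tmp/malloc.json

Swapping --ib and --ob, as the test does next, copies the data back in the other direction.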
00:07:25.249 [2024-07-15 22:37:40.694727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63636 ] 00:07:25.249 { 00:07:25.249 "subsystems": [ 00:07:25.249 { 00:07:25.249 "subsystem": "bdev", 00:07:25.249 "config": [ 00:07:25.249 { 00:07:25.249 "params": { 00:07:25.249 "block_size": 512, 00:07:25.249 "num_blocks": 1048576, 00:07:25.249 "name": "malloc0" 00:07:25.249 }, 00:07:25.249 "method": "bdev_malloc_create" 00:07:25.249 }, 00:07:25.249 { 00:07:25.249 "params": { 00:07:25.249 "block_size": 512, 00:07:25.249 "num_blocks": 1048576, 00:07:25.249 "name": "malloc1" 00:07:25.249 }, 00:07:25.249 "method": "bdev_malloc_create" 00:07:25.249 }, 00:07:25.249 { 00:07:25.249 "method": "bdev_wait_for_examine" 00:07:25.249 } 00:07:25.249 ] 00:07:25.249 } 00:07:25.249 ] 00:07:25.249 } 00:07:25.508 [2024-07-15 22:37:40.832602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.508 [2024-07-15 22:37:40.931575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.508 [2024-07-15 22:37:40.985785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.958  Copying: 204/512 [MB] (204 MBps) Copying: 414/512 [MB] (210 MBps) Copying: 512/512 [MB] (average 206 MBps) 00:07:28.958 00:07:28.958 00:07:28.958 real 0m7.605s 00:07:28.958 user 0m6.577s 00:07:28.958 sys 0m0.872s 00:07:28.958 22:37:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.958 ************************************ 00:07:28.958 END TEST dd_malloc_copy 00:07:28.958 ************************************ 00:07:28.958 22:37:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.958 22:37:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:28.958 00:07:28.958 real 0m7.749s 00:07:28.958 user 0m6.638s 00:07:28.958 sys 0m0.954s 00:07:28.958 22:37:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.958 ************************************ 00:07:28.958 22:37:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:28.958 END TEST spdk_dd_malloc 00:07:28.958 ************************************ 00:07:29.218 22:37:44 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:29.218 22:37:44 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:29.218 22:37:44 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.218 22:37:44 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.218 22:37:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:29.218 ************************************ 00:07:29.218 START TEST spdk_dd_bdev_to_bdev 00:07:29.218 ************************************ 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:29.218 * Looking for test storage... 
00:07:29.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:29.218 
22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:29.218 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.219 ************************************ 00:07:29.219 START TEST dd_inflate_file 00:07:29.219 ************************************ 00:07:29.219 22:37:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:29.219 [2024-07-15 22:37:44.693554] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:29.219 [2024-07-15 22:37:44.693673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63741 ] 00:07:29.478 [2024-07-15 22:37:44.830272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.478 [2024-07-15 22:37:44.949669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.478 [2024-07-15 22:37:45.004339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.735  Copying: 64/64 [MB] (average 1600 MBps) 00:07:29.735 00:07:29.994 00:07:29.994 real 0m0.658s 00:07:29.994 user 0m0.413s 00:07:29.994 sys 0m0.297s 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:29.994 ************************************ 00:07:29.994 END TEST dd_inflate_file 00:07:29.994 ************************************ 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.994 ************************************ 00:07:29.994 START TEST dd_copy_to_out_bdev 00:07:29.994 ************************************ 00:07:29.994 22:37:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:29.994 { 00:07:29.994 "subsystems": [ 00:07:29.994 { 00:07:29.994 "subsystem": "bdev", 00:07:29.994 "config": [ 00:07:29.994 { 00:07:29.994 "params": { 00:07:29.994 "trtype": "pcie", 00:07:29.994 "traddr": "0000:00:10.0", 00:07:29.994 "name": "Nvme0" 00:07:29.994 }, 00:07:29.994 "method": "bdev_nvme_attach_controller" 00:07:29.994 }, 00:07:29.994 { 00:07:29.994 "params": { 00:07:29.994 "trtype": "pcie", 00:07:29.994 "traddr": "0000:00:11.0", 00:07:29.994 "name": "Nvme1" 00:07:29.994 }, 00:07:29.994 "method": "bdev_nvme_attach_controller" 00:07:29.994 }, 00:07:29.994 { 00:07:29.994 "method": "bdev_wait_for_examine" 00:07:29.994 } 00:07:29.994 ] 00:07:29.994 } 00:07:29.994 ] 00:07:29.994 } 00:07:29.994 [2024-07-15 22:37:45.415609] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:29.994 [2024-07-15 22:37:45.416180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63774 ] 00:07:29.994 [2024-07-15 22:37:45.558316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.253 [2024-07-15 22:37:45.677406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.253 [2024-07-15 22:37:45.734689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.886  Copying: 60/64 [MB] (60 MBps) Copying: 64/64 [MB] (average 60 MBps) 00:07:31.886 00:07:31.886 00:07:31.886 real 0m1.875s 00:07:31.886 user 0m1.643s 00:07:31.886 sys 0m1.407s 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:31.886 ************************************ 00:07:31.886 END TEST dd_copy_to_out_bdev 00:07:31.886 ************************************ 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:31.886 ************************************ 00:07:31.886 START TEST dd_offset_magic 00:07:31.886 ************************************ 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:31.886 22:37:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:31.886 [2024-07-15 22:37:47.347827] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:31.886 [2024-07-15 22:37:47.347962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63819 ] 00:07:31.886 { 00:07:31.886 "subsystems": [ 00:07:31.886 { 00:07:31.886 "subsystem": "bdev", 00:07:31.886 "config": [ 00:07:31.886 { 00:07:31.886 "params": { 00:07:31.886 "trtype": "pcie", 00:07:31.886 "traddr": "0000:00:10.0", 00:07:31.886 "name": "Nvme0" 00:07:31.886 }, 00:07:31.886 "method": "bdev_nvme_attach_controller" 00:07:31.886 }, 00:07:31.886 { 00:07:31.886 "params": { 00:07:31.886 "trtype": "pcie", 00:07:31.886 "traddr": "0000:00:11.0", 00:07:31.886 "name": "Nvme1" 00:07:31.886 }, 00:07:31.886 "method": "bdev_nvme_attach_controller" 00:07:31.886 }, 00:07:31.886 { 00:07:31.886 "method": "bdev_wait_for_examine" 00:07:31.886 } 00:07:31.886 ] 00:07:31.886 } 00:07:31.886 ] 00:07:31.886 } 00:07:32.145 [2024-07-15 22:37:47.488783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.145 [2024-07-15 22:37:47.617882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.145 [2024-07-15 22:37:47.677945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.697  Copying: 65/65 [MB] (average 902 MBps) 00:07:32.697 00:07:32.697 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:32.697 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:32.697 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:32.697 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:32.697 [2024-07-15 22:37:48.235962] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:32.697 [2024-07-15 22:37:48.236059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63839 ] 00:07:32.697 { 00:07:32.697 "subsystems": [ 00:07:32.697 { 00:07:32.697 "subsystem": "bdev", 00:07:32.697 "config": [ 00:07:32.697 { 00:07:32.697 "params": { 00:07:32.697 "trtype": "pcie", 00:07:32.697 "traddr": "0000:00:10.0", 00:07:32.697 "name": "Nvme0" 00:07:32.697 }, 00:07:32.697 "method": "bdev_nvme_attach_controller" 00:07:32.697 }, 00:07:32.697 { 00:07:32.697 "params": { 00:07:32.697 "trtype": "pcie", 00:07:32.697 "traddr": "0000:00:11.0", 00:07:32.697 "name": "Nvme1" 00:07:32.697 }, 00:07:32.697 "method": "bdev_nvme_attach_controller" 00:07:32.697 }, 00:07:32.697 { 00:07:32.697 "method": "bdev_wait_for_examine" 00:07:32.697 } 00:07:32.697 ] 00:07:32.697 } 00:07:32.697 ] 00:07:32.697 } 00:07:32.956 [2024-07-15 22:37:48.368513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.956 [2024-07-15 22:37:48.477853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.215 [2024-07-15 22:37:48.531858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.474  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.474 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:33.474 22:37:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:33.474 [2024-07-15 22:37:48.994545] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:33.474 [2024-07-15 22:37:48.994684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63856 ] 00:07:33.474 { 00:07:33.474 "subsystems": [ 00:07:33.474 { 00:07:33.474 "subsystem": "bdev", 00:07:33.474 "config": [ 00:07:33.474 { 00:07:33.474 "params": { 00:07:33.474 "trtype": "pcie", 00:07:33.474 "traddr": "0000:00:10.0", 00:07:33.474 "name": "Nvme0" 00:07:33.474 }, 00:07:33.474 "method": "bdev_nvme_attach_controller" 00:07:33.474 }, 00:07:33.474 { 00:07:33.474 "params": { 00:07:33.474 "trtype": "pcie", 00:07:33.474 "traddr": "0000:00:11.0", 00:07:33.474 "name": "Nvme1" 00:07:33.474 }, 00:07:33.474 "method": "bdev_nvme_attach_controller" 00:07:33.474 }, 00:07:33.474 { 00:07:33.474 "method": "bdev_wait_for_examine" 00:07:33.474 } 00:07:33.474 ] 00:07:33.474 } 00:07:33.474 ] 00:07:33.474 } 00:07:33.732 [2024-07-15 22:37:49.132441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.732 [2024-07-15 22:37:49.243279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.732 [2024-07-15 22:37:49.298330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.249  Copying: 65/65 [MB] (average 1048 MBps) 00:07:34.249 00:07:34.249 22:37:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:34.249 22:37:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:34.249 22:37:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:34.249 22:37:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:34.508 { 00:07:34.508 "subsystems": [ 00:07:34.508 { 00:07:34.508 "subsystem": "bdev", 00:07:34.508 "config": [ 00:07:34.508 { 00:07:34.509 "params": { 00:07:34.509 "trtype": "pcie", 00:07:34.509 "traddr": "0000:00:10.0", 00:07:34.509 "name": "Nvme0" 00:07:34.509 }, 00:07:34.509 "method": "bdev_nvme_attach_controller" 00:07:34.509 }, 00:07:34.509 { 00:07:34.509 "params": { 00:07:34.509 "trtype": "pcie", 00:07:34.509 "traddr": "0000:00:11.0", 00:07:34.509 "name": "Nvme1" 00:07:34.509 }, 00:07:34.509 "method": "bdev_nvme_attach_controller" 00:07:34.509 }, 00:07:34.509 { 00:07:34.509 "method": "bdev_wait_for_examine" 00:07:34.509 } 00:07:34.509 ] 00:07:34.509 } 00:07:34.509 ] 00:07:34.509 } 00:07:34.509 [2024-07-15 22:37:49.863926] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:34.509 [2024-07-15 22:37:49.864064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63876 ] 00:07:34.509 [2024-07-15 22:37:50.013546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.766 [2024-07-15 22:37:50.118573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.766 [2024-07-15 22:37:50.173538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.025  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:35.025 00:07:35.025 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:35.025 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:35.025 00:07:35.025 real 0m3.300s 00:07:35.025 user 0m2.446s 00:07:35.025 sys 0m0.943s 00:07:35.025 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.025 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:35.025 ************************************ 00:07:35.025 END TEST dd_offset_magic 00:07:35.025 ************************************ 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:35.284 22:37:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:35.284 [2024-07-15 22:37:50.688490] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:35.284 [2024-07-15 22:37:50.688610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63907 ] 00:07:35.284 { 00:07:35.284 "subsystems": [ 00:07:35.284 { 00:07:35.284 "subsystem": "bdev", 00:07:35.284 "config": [ 00:07:35.284 { 00:07:35.284 "params": { 00:07:35.284 "trtype": "pcie", 00:07:35.284 "traddr": "0000:00:10.0", 00:07:35.284 "name": "Nvme0" 00:07:35.284 }, 00:07:35.284 "method": "bdev_nvme_attach_controller" 00:07:35.284 }, 00:07:35.284 { 00:07:35.284 "params": { 00:07:35.284 "trtype": "pcie", 00:07:35.284 "traddr": "0000:00:11.0", 00:07:35.284 "name": "Nvme1" 00:07:35.284 }, 00:07:35.284 "method": "bdev_nvme_attach_controller" 00:07:35.284 }, 00:07:35.284 { 00:07:35.284 "method": "bdev_wait_for_examine" 00:07:35.284 } 00:07:35.284 ] 00:07:35.284 } 00:07:35.284 ] 00:07:35.284 } 00:07:35.284 [2024-07-15 22:37:50.825208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.543 [2024-07-15 22:37:50.932564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.543 [2024-07-15 22:37:50.988210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.060  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:36.060 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:36.060 22:37:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.060 [2024-07-15 22:37:51.443243] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:36.060 [2024-07-15 22:37:51.443386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63928 ] 00:07:36.060 { 00:07:36.060 "subsystems": [ 00:07:36.060 { 00:07:36.060 "subsystem": "bdev", 00:07:36.060 "config": [ 00:07:36.060 { 00:07:36.060 "params": { 00:07:36.060 "trtype": "pcie", 00:07:36.060 "traddr": "0000:00:10.0", 00:07:36.060 "name": "Nvme0" 00:07:36.060 }, 00:07:36.060 "method": "bdev_nvme_attach_controller" 00:07:36.060 }, 00:07:36.060 { 00:07:36.060 "params": { 00:07:36.060 "trtype": "pcie", 00:07:36.060 "traddr": "0000:00:11.0", 00:07:36.060 "name": "Nvme1" 00:07:36.060 }, 00:07:36.060 "method": "bdev_nvme_attach_controller" 00:07:36.060 }, 00:07:36.060 { 00:07:36.060 "method": "bdev_wait_for_examine" 00:07:36.060 } 00:07:36.060 ] 00:07:36.060 } 00:07:36.060 ] 00:07:36.060 } 00:07:36.060 [2024-07-15 22:37:51.582552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.319 [2024-07-15 22:37:51.695145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.319 [2024-07-15 22:37:51.751681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.835  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:36.835 00:07:36.835 22:37:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:36.835 00:07:36.835 real 0m7.634s 00:07:36.835 user 0m5.696s 00:07:36.835 sys 0m3.350s 00:07:36.835 22:37:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.835 22:37:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:36.835 ************************************ 00:07:36.835 END TEST spdk_dd_bdev_to_bdev 00:07:36.835 ************************************ 00:07:36.835 22:37:52 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:36.835 22:37:52 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:36.835 22:37:52 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:36.835 22:37:52 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.835 22:37:52 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.835 22:37:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.835 ************************************ 00:07:36.835 START TEST spdk_dd_uring 00:07:36.835 ************************************ 00:07:36.835 22:37:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:36.835 * Looking for test storage... 
00:07:36.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.835 22:37:52 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.835 22:37:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.835 22:37:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.835 22:37:52 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:36.836 ************************************ 00:07:36.836 START TEST dd_uring_copy 00:07:36.836 ************************************ 00:07:36.836 
22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=xxwwxx1qaaeoioclgrex3lbfutzw11ufs9acfd37gm1p468khlnyyaajeg0a54harz1feoz6voauzo55rhqo4ycxnwsoxzetxdmqyd0h2hciiv4ktb5fp1dgj5x9vhh9mrgd2jtxbyqvyo902c97s0aukpanf1sktz30u3j7v8j3jncgruzoltmr106zz1k62q5zjjlre9u0wi15qkqt5wsx2ziyqvngjyq6vu7n9y9nkefn30bwgi5xpe4ajc7cu1qx0358g7zk52lncblcicbrihh82xml8e3gg8umxvx6jnhdpvbzt52i9y3hhep229079clckz0dh1gpt2skqq018s5pn9qf25du5ubrthhyxo1iiu3mvkr0u71o08xc7636fuwe92pcg4m92y9vi9b91ik45hdp5dmvb5uvfytdj4fpzns06cqx2lmydzubrr78v899qzlnbksw2mtxelmxys1hu1wmcc2e7w35tfsp0smiul7zc0yg1f2i40v3gmaj2ynghe0gr55o2bm11ti4k0xy4vldhqrlwnynhx2wh7myxtw0o65ijsubjiz553exib9ng5jg56plk0b0nmmlt2rco7140c3qxbu576dweh1f7k85cbigjrpvjao6e3oha4al6swkd1u80iedbc4o4p4jei0numuqibfhhy535wb62z4s0hk7ew431j4cv33o0ykglqm63ccsufxv3thf1zm83f0ml7y4antwvjvaqzge432nl2e4ggjjgljsvuasz2sdi3uhhs61nr5l6myvb9cipzbjzdr6d3qecktk2pyudkqp46ed43n159tp6tizooh8o73fizo97bk3tinrtweohqjk6ev3wi6tzbuq6cddnwclozetqcznjti3lazg1m1vclhuv9zv60gxx4fo8u3is6zsusdobaz0kdlehjo4ur7lmr1d21v35cyzw1vsi5u84y7apxhkgr3zqlyw9yffuzqzhvv2bp352us2fmqd7nntjg9b890sxd9c 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo xxwwxx1qaaeoioclgrex3lbfutzw11ufs9acfd37gm1p468khlnyyaajeg0a54harz1feoz6voauzo55rhqo4ycxnwsoxzetxdmqyd0h2hciiv4ktb5fp1dgj5x9vhh9mrgd2jtxbyqvyo902c97s0aukpanf1sktz30u3j7v8j3jncgruzoltmr106zz1k62q5zjjlre9u0wi15qkqt5wsx2ziyqvngjyq6vu7n9y9nkefn30bwgi5xpe4ajc7cu1qx0358g7zk52lncblcicbrihh82xml8e3gg8umxvx6jnhdpvbzt52i9y3hhep229079clckz0dh1gpt2skqq018s5pn9qf25du5ubrthhyxo1iiu3mvkr0u71o08xc7636fuwe92pcg4m92y9vi9b91ik45hdp5dmvb5uvfytdj4fpzns06cqx2lmydzubrr78v899qzlnbksw2mtxelmxys1hu1wmcc2e7w35tfsp0smiul7zc0yg1f2i40v3gmaj2ynghe0gr55o2bm11ti4k0xy4vldhqrlwnynhx2wh7myxtw0o65ijsubjiz553exib9ng5jg56plk0b0nmmlt2rco7140c3qxbu576dweh1f7k85cbigjrpvjao6e3oha4al6swkd1u80iedbc4o4p4jei0numuqibfhhy535wb62z4s0hk7ew431j4cv33o0ykglqm63ccsufxv3thf1zm83f0ml7y4antwvjvaqzge432nl2e4ggjjgljsvuasz2sdi3uhhs61nr5l6myvb9cipzbjzdr6d3qecktk2pyudkqp46ed43n159tp6tizooh8o73fizo97bk3tinrtweohqjk6ev3wi6tzbuq6cddnwclozetqcznjti3lazg1m1vclhuv9zv60gxx4fo8u3is6zsusdobaz0kdlehjo4ur7lmr1d21v35cyzw1vsi5u84y7apxhkgr3zqlyw9yffuzqzhvv2bp352us2fmqd7nntjg9b890sxd9c 00:07:36.836 22:37:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:37.095 [2024-07-15 22:37:52.410741] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:37.095 [2024-07-15 22:37:52.410872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63997 ] 00:07:37.095 [2024-07-15 22:37:52.552056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.373 [2024-07-15 22:37:52.683369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.373 [2024-07-15 22:37:52.742231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.195  Copying: 511/511 [MB] (average 1484 MBps) 00:07:38.195 00:07:38.195 22:37:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:38.195 22:37:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:38.195 22:37:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:38.195 22:37:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.456 [2024-07-15 22:37:53.793148] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:38.456 [2024-07-15 22:37:53.793248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64020 ] 00:07:38.456 { 00:07:38.456 "subsystems": [ 00:07:38.456 { 00:07:38.456 "subsystem": "bdev", 00:07:38.456 "config": [ 00:07:38.456 { 00:07:38.456 "params": { 00:07:38.456 "block_size": 512, 00:07:38.456 "num_blocks": 1048576, 00:07:38.456 "name": "malloc0" 00:07:38.456 }, 00:07:38.456 "method": "bdev_malloc_create" 00:07:38.456 }, 00:07:38.456 { 00:07:38.456 "params": { 00:07:38.456 "filename": "/dev/zram1", 00:07:38.456 "name": "uring0" 00:07:38.456 }, 00:07:38.456 "method": "bdev_uring_create" 00:07:38.456 }, 00:07:38.456 { 00:07:38.456 "method": "bdev_wait_for_examine" 00:07:38.456 } 00:07:38.456 ] 00:07:38.456 } 00:07:38.456 ] 00:07:38.456 } 00:07:38.456 [2024-07-15 22:37:53.933607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.714 [2024-07-15 22:37:54.048822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.714 [2024-07-15 22:37:54.104077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.722  Copying: 220/512 [MB] (220 MBps) Copying: 441/512 [MB] (221 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:07:41.722 00:07:41.722 22:37:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:41.722 22:37:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:41.722 22:37:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:41.722 22:37:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.722 [2024-07-15 22:37:57.096109] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:41.723 [2024-07-15 22:37:57.096223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64064 ] 00:07:41.723 { 00:07:41.723 "subsystems": [ 00:07:41.723 { 00:07:41.723 "subsystem": "bdev", 00:07:41.723 "config": [ 00:07:41.723 { 00:07:41.723 "params": { 00:07:41.723 "block_size": 512, 00:07:41.723 "num_blocks": 1048576, 00:07:41.723 "name": "malloc0" 00:07:41.723 }, 00:07:41.723 "method": "bdev_malloc_create" 00:07:41.723 }, 00:07:41.723 { 00:07:41.723 "params": { 00:07:41.723 "filename": "/dev/zram1", 00:07:41.723 "name": "uring0" 00:07:41.723 }, 00:07:41.723 "method": "bdev_uring_create" 00:07:41.723 }, 00:07:41.723 { 00:07:41.723 "method": "bdev_wait_for_examine" 00:07:41.723 } 00:07:41.723 ] 00:07:41.723 } 00:07:41.723 ] 00:07:41.723 } 00:07:41.723 [2024-07-15 22:37:57.233106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.982 [2024-07-15 22:37:57.367014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.982 [2024-07-15 22:37:57.423769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.792  Copying: 180/512 [MB] (180 MBps) Copying: 353/512 [MB] (172 MBps) Copying: 509/512 [MB] (155 MBps) Copying: 512/512 [MB] (average 169 MBps) 00:07:45.792 00:07:45.792 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:45.792 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ xxwwxx1qaaeoioclgrex3lbfutzw11ufs9acfd37gm1p468khlnyyaajeg0a54harz1feoz6voauzo55rhqo4ycxnwsoxzetxdmqyd0h2hciiv4ktb5fp1dgj5x9vhh9mrgd2jtxbyqvyo902c97s0aukpanf1sktz30u3j7v8j3jncgruzoltmr106zz1k62q5zjjlre9u0wi15qkqt5wsx2ziyqvngjyq6vu7n9y9nkefn30bwgi5xpe4ajc7cu1qx0358g7zk52lncblcicbrihh82xml8e3gg8umxvx6jnhdpvbzt52i9y3hhep229079clckz0dh1gpt2skqq018s5pn9qf25du5ubrthhyxo1iiu3mvkr0u71o08xc7636fuwe92pcg4m92y9vi9b91ik45hdp5dmvb5uvfytdj4fpzns06cqx2lmydzubrr78v899qzlnbksw2mtxelmxys1hu1wmcc2e7w35tfsp0smiul7zc0yg1f2i40v3gmaj2ynghe0gr55o2bm11ti4k0xy4vldhqrlwnynhx2wh7myxtw0o65ijsubjiz553exib9ng5jg56plk0b0nmmlt2rco7140c3qxbu576dweh1f7k85cbigjrpvjao6e3oha4al6swkd1u80iedbc4o4p4jei0numuqibfhhy535wb62z4s0hk7ew431j4cv33o0ykglqm63ccsufxv3thf1zm83f0ml7y4antwvjvaqzge432nl2e4ggjjgljsvuasz2sdi3uhhs61nr5l6myvb9cipzbjzdr6d3qecktk2pyudkqp46ed43n159tp6tizooh8o73fizo97bk3tinrtweohqjk6ev3wi6tzbuq6cddnwclozetqcznjti3lazg1m1vclhuv9zv60gxx4fo8u3is6zsusdobaz0kdlehjo4ur7lmr1d21v35cyzw1vsi5u84y7apxhkgr3zqlyw9yffuzqzhvv2bp352us2fmqd7nntjg9b890sxd9c == 
\x\x\w\w\x\x\1\q\a\a\e\o\i\o\c\l\g\r\e\x\3\l\b\f\u\t\z\w\1\1\u\f\s\9\a\c\f\d\3\7\g\m\1\p\4\6\8\k\h\l\n\y\y\a\a\j\e\g\0\a\5\4\h\a\r\z\1\f\e\o\z\6\v\o\a\u\z\o\5\5\r\h\q\o\4\y\c\x\n\w\s\o\x\z\e\t\x\d\m\q\y\d\0\h\2\h\c\i\i\v\4\k\t\b\5\f\p\1\d\g\j\5\x\9\v\h\h\9\m\r\g\d\2\j\t\x\b\y\q\v\y\o\9\0\2\c\9\7\s\0\a\u\k\p\a\n\f\1\s\k\t\z\3\0\u\3\j\7\v\8\j\3\j\n\c\g\r\u\z\o\l\t\m\r\1\0\6\z\z\1\k\6\2\q\5\z\j\j\l\r\e\9\u\0\w\i\1\5\q\k\q\t\5\w\s\x\2\z\i\y\q\v\n\g\j\y\q\6\v\u\7\n\9\y\9\n\k\e\f\n\3\0\b\w\g\i\5\x\p\e\4\a\j\c\7\c\u\1\q\x\0\3\5\8\g\7\z\k\5\2\l\n\c\b\l\c\i\c\b\r\i\h\h\8\2\x\m\l\8\e\3\g\g\8\u\m\x\v\x\6\j\n\h\d\p\v\b\z\t\5\2\i\9\y\3\h\h\e\p\2\2\9\0\7\9\c\l\c\k\z\0\d\h\1\g\p\t\2\s\k\q\q\0\1\8\s\5\p\n\9\q\f\2\5\d\u\5\u\b\r\t\h\h\y\x\o\1\i\i\u\3\m\v\k\r\0\u\7\1\o\0\8\x\c\7\6\3\6\f\u\w\e\9\2\p\c\g\4\m\9\2\y\9\v\i\9\b\9\1\i\k\4\5\h\d\p\5\d\m\v\b\5\u\v\f\y\t\d\j\4\f\p\z\n\s\0\6\c\q\x\2\l\m\y\d\z\u\b\r\r\7\8\v\8\9\9\q\z\l\n\b\k\s\w\2\m\t\x\e\l\m\x\y\s\1\h\u\1\w\m\c\c\2\e\7\w\3\5\t\f\s\p\0\s\m\i\u\l\7\z\c\0\y\g\1\f\2\i\4\0\v\3\g\m\a\j\2\y\n\g\h\e\0\g\r\5\5\o\2\b\m\1\1\t\i\4\k\0\x\y\4\v\l\d\h\q\r\l\w\n\y\n\h\x\2\w\h\7\m\y\x\t\w\0\o\6\5\i\j\s\u\b\j\i\z\5\5\3\e\x\i\b\9\n\g\5\j\g\5\6\p\l\k\0\b\0\n\m\m\l\t\2\r\c\o\7\1\4\0\c\3\q\x\b\u\5\7\6\d\w\e\h\1\f\7\k\8\5\c\b\i\g\j\r\p\v\j\a\o\6\e\3\o\h\a\4\a\l\6\s\w\k\d\1\u\8\0\i\e\d\b\c\4\o\4\p\4\j\e\i\0\n\u\m\u\q\i\b\f\h\h\y\5\3\5\w\b\6\2\z\4\s\0\h\k\7\e\w\4\3\1\j\4\c\v\3\3\o\0\y\k\g\l\q\m\6\3\c\c\s\u\f\x\v\3\t\h\f\1\z\m\8\3\f\0\m\l\7\y\4\a\n\t\w\v\j\v\a\q\z\g\e\4\3\2\n\l\2\e\4\g\g\j\j\g\l\j\s\v\u\a\s\z\2\s\d\i\3\u\h\h\s\6\1\n\r\5\l\6\m\y\v\b\9\c\i\p\z\b\j\z\d\r\6\d\3\q\e\c\k\t\k\2\p\y\u\d\k\q\p\4\6\e\d\4\3\n\1\5\9\t\p\6\t\i\z\o\o\h\8\o\7\3\f\i\z\o\9\7\b\k\3\t\i\n\r\t\w\e\o\h\q\j\k\6\e\v\3\w\i\6\t\z\b\u\q\6\c\d\d\n\w\c\l\o\z\e\t\q\c\z\n\j\t\i\3\l\a\z\g\1\m\1\v\c\l\h\u\v\9\z\v\6\0\g\x\x\4\f\o\8\u\3\i\s\6\z\s\u\s\d\o\b\a\z\0\k\d\l\e\h\j\o\4\u\r\7\l\m\r\1\d\2\1\v\3\5\c\y\z\w\1\v\s\i\5\u\8\4\y\7\a\p\x\h\k\g\r\3\z\q\l\y\w\9\y\f\f\u\z\q\z\h\v\v\2\b\p\3\5\2\u\s\2\f\m\q\d\7\n\n\t\j\g\9\b\8\9\0\s\x\d\9\c ]] 00:07:45.792 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:45.792 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ xxwwxx1qaaeoioclgrex3lbfutzw11ufs9acfd37gm1p468khlnyyaajeg0a54harz1feoz6voauzo55rhqo4ycxnwsoxzetxdmqyd0h2hciiv4ktb5fp1dgj5x9vhh9mrgd2jtxbyqvyo902c97s0aukpanf1sktz30u3j7v8j3jncgruzoltmr106zz1k62q5zjjlre9u0wi15qkqt5wsx2ziyqvngjyq6vu7n9y9nkefn30bwgi5xpe4ajc7cu1qx0358g7zk52lncblcicbrihh82xml8e3gg8umxvx6jnhdpvbzt52i9y3hhep229079clckz0dh1gpt2skqq018s5pn9qf25du5ubrthhyxo1iiu3mvkr0u71o08xc7636fuwe92pcg4m92y9vi9b91ik45hdp5dmvb5uvfytdj4fpzns06cqx2lmydzubrr78v899qzlnbksw2mtxelmxys1hu1wmcc2e7w35tfsp0smiul7zc0yg1f2i40v3gmaj2ynghe0gr55o2bm11ti4k0xy4vldhqrlwnynhx2wh7myxtw0o65ijsubjiz553exib9ng5jg56plk0b0nmmlt2rco7140c3qxbu576dweh1f7k85cbigjrpvjao6e3oha4al6swkd1u80iedbc4o4p4jei0numuqibfhhy535wb62z4s0hk7ew431j4cv33o0ykglqm63ccsufxv3thf1zm83f0ml7y4antwvjvaqzge432nl2e4ggjjgljsvuasz2sdi3uhhs61nr5l6myvb9cipzbjzdr6d3qecktk2pyudkqp46ed43n159tp6tizooh8o73fizo97bk3tinrtweohqjk6ev3wi6tzbuq6cddnwclozetqcznjti3lazg1m1vclhuv9zv60gxx4fo8u3is6zsusdobaz0kdlehjo4ur7lmr1d21v35cyzw1vsi5u84y7apxhkgr3zqlyw9yffuzqzhvv2bp352us2fmqd7nntjg9b890sxd9c == 
\x\x\w\w\x\x\1\q\a\a\e\o\i\o\c\l\g\r\e\x\3\l\b\f\u\t\z\w\1\1\u\f\s\9\a\c\f\d\3\7\g\m\1\p\4\6\8\k\h\l\n\y\y\a\a\j\e\g\0\a\5\4\h\a\r\z\1\f\e\o\z\6\v\o\a\u\z\o\5\5\r\h\q\o\4\y\c\x\n\w\s\o\x\z\e\t\x\d\m\q\y\d\0\h\2\h\c\i\i\v\4\k\t\b\5\f\p\1\d\g\j\5\x\9\v\h\h\9\m\r\g\d\2\j\t\x\b\y\q\v\y\o\9\0\2\c\9\7\s\0\a\u\k\p\a\n\f\1\s\k\t\z\3\0\u\3\j\7\v\8\j\3\j\n\c\g\r\u\z\o\l\t\m\r\1\0\6\z\z\1\k\6\2\q\5\z\j\j\l\r\e\9\u\0\w\i\1\5\q\k\q\t\5\w\s\x\2\z\i\y\q\v\n\g\j\y\q\6\v\u\7\n\9\y\9\n\k\e\f\n\3\0\b\w\g\i\5\x\p\e\4\a\j\c\7\c\u\1\q\x\0\3\5\8\g\7\z\k\5\2\l\n\c\b\l\c\i\c\b\r\i\h\h\8\2\x\m\l\8\e\3\g\g\8\u\m\x\v\x\6\j\n\h\d\p\v\b\z\t\5\2\i\9\y\3\h\h\e\p\2\2\9\0\7\9\c\l\c\k\z\0\d\h\1\g\p\t\2\s\k\q\q\0\1\8\s\5\p\n\9\q\f\2\5\d\u\5\u\b\r\t\h\h\y\x\o\1\i\i\u\3\m\v\k\r\0\u\7\1\o\0\8\x\c\7\6\3\6\f\u\w\e\9\2\p\c\g\4\m\9\2\y\9\v\i\9\b\9\1\i\k\4\5\h\d\p\5\d\m\v\b\5\u\v\f\y\t\d\j\4\f\p\z\n\s\0\6\c\q\x\2\l\m\y\d\z\u\b\r\r\7\8\v\8\9\9\q\z\l\n\b\k\s\w\2\m\t\x\e\l\m\x\y\s\1\h\u\1\w\m\c\c\2\e\7\w\3\5\t\f\s\p\0\s\m\i\u\l\7\z\c\0\y\g\1\f\2\i\4\0\v\3\g\m\a\j\2\y\n\g\h\e\0\g\r\5\5\o\2\b\m\1\1\t\i\4\k\0\x\y\4\v\l\d\h\q\r\l\w\n\y\n\h\x\2\w\h\7\m\y\x\t\w\0\o\6\5\i\j\s\u\b\j\i\z\5\5\3\e\x\i\b\9\n\g\5\j\g\5\6\p\l\k\0\b\0\n\m\m\l\t\2\r\c\o\7\1\4\0\c\3\q\x\b\u\5\7\6\d\w\e\h\1\f\7\k\8\5\c\b\i\g\j\r\p\v\j\a\o\6\e\3\o\h\a\4\a\l\6\s\w\k\d\1\u\8\0\i\e\d\b\c\4\o\4\p\4\j\e\i\0\n\u\m\u\q\i\b\f\h\h\y\5\3\5\w\b\6\2\z\4\s\0\h\k\7\e\w\4\3\1\j\4\c\v\3\3\o\0\y\k\g\l\q\m\6\3\c\c\s\u\f\x\v\3\t\h\f\1\z\m\8\3\f\0\m\l\7\y\4\a\n\t\w\v\j\v\a\q\z\g\e\4\3\2\n\l\2\e\4\g\g\j\j\g\l\j\s\v\u\a\s\z\2\s\d\i\3\u\h\h\s\6\1\n\r\5\l\6\m\y\v\b\9\c\i\p\z\b\j\z\d\r\6\d\3\q\e\c\k\t\k\2\p\y\u\d\k\q\p\4\6\e\d\4\3\n\1\5\9\t\p\6\t\i\z\o\o\h\8\o\7\3\f\i\z\o\9\7\b\k\3\t\i\n\r\t\w\e\o\h\q\j\k\6\e\v\3\w\i\6\t\z\b\u\q\6\c\d\d\n\w\c\l\o\z\e\t\q\c\z\n\j\t\i\3\l\a\z\g\1\m\1\v\c\l\h\u\v\9\z\v\6\0\g\x\x\4\f\o\8\u\3\i\s\6\z\s\u\s\d\o\b\a\z\0\k\d\l\e\h\j\o\4\u\r\7\l\m\r\1\d\2\1\v\3\5\c\y\z\w\1\v\s\i\5\u\8\4\y\7\a\p\x\h\k\g\r\3\z\q\l\y\w\9\y\f\f\u\z\q\z\h\v\v\2\b\p\3\5\2\u\s\2\f\m\q\d\7\n\n\t\j\g\9\b\8\9\0\s\x\d\9\c ]] 00:07:45.793 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:46.051 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:46.051 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:46.051 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:46.051 22:38:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.051 [2024-07-15 22:38:01.512652] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:46.051 [2024-07-15 22:38:01.512779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64145 ] 00:07:46.051 { 00:07:46.051 "subsystems": [ 00:07:46.051 { 00:07:46.051 "subsystem": "bdev", 00:07:46.051 "config": [ 00:07:46.051 { 00:07:46.051 "params": { 00:07:46.051 "block_size": 512, 00:07:46.051 "num_blocks": 1048576, 00:07:46.051 "name": "malloc0" 00:07:46.051 }, 00:07:46.051 "method": "bdev_malloc_create" 00:07:46.052 }, 00:07:46.052 { 00:07:46.052 "params": { 00:07:46.052 "filename": "/dev/zram1", 00:07:46.052 "name": "uring0" 00:07:46.052 }, 00:07:46.052 "method": "bdev_uring_create" 00:07:46.052 }, 00:07:46.052 { 00:07:46.052 "method": "bdev_wait_for_examine" 00:07:46.052 } 00:07:46.052 ] 00:07:46.052 } 00:07:46.052 ] 00:07:46.052 } 00:07:46.310 [2024-07-15 22:38:01.653286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.310 [2024-07-15 22:38:01.772886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.310 [2024-07-15 22:38:01.834455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.382  Copying: 149/512 [MB] (149 MBps) Copying: 298/512 [MB] (149 MBps) Copying: 450/512 [MB] (151 MBps) Copying: 512/512 [MB] (average 149 MBps) 00:07:50.382 00:07:50.382 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:50.383 22:38:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:50.641 [2024-07-15 22:38:05.956728] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:50.641 [2024-07-15 22:38:05.956861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64201 ] 00:07:50.641 { 00:07:50.641 "subsystems": [ 00:07:50.641 { 00:07:50.641 "subsystem": "bdev", 00:07:50.641 "config": [ 00:07:50.641 { 00:07:50.641 "params": { 00:07:50.641 "block_size": 512, 00:07:50.641 "num_blocks": 1048576, 00:07:50.641 "name": "malloc0" 00:07:50.641 }, 00:07:50.641 "method": "bdev_malloc_create" 00:07:50.641 }, 00:07:50.641 { 00:07:50.641 "params": { 00:07:50.641 "filename": "/dev/zram1", 00:07:50.641 "name": "uring0" 00:07:50.641 }, 00:07:50.641 "method": "bdev_uring_create" 00:07:50.641 }, 00:07:50.641 { 00:07:50.641 "params": { 00:07:50.641 "name": "uring0" 00:07:50.641 }, 00:07:50.641 "method": "bdev_uring_delete" 00:07:50.641 }, 00:07:50.641 { 00:07:50.641 "method": "bdev_wait_for_examine" 00:07:50.641 } 00:07:50.641 ] 00:07:50.641 } 00:07:50.641 ] 00:07:50.641 } 00:07:50.641 [2024-07-15 22:38:06.097429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.901 [2024-07-15 22:38:06.216735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.901 [2024-07-15 22:38:06.278124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.418  Copying: 0/0 [B] (average 0 Bps) 00:07:51.418 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.418 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.419 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.419 22:38:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:51.677 [2024-07-15 22:38:07.025472] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:51.677 [2024-07-15 22:38:07.025610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64230 ] 00:07:51.677 { 00:07:51.677 "subsystems": [ 00:07:51.677 { 00:07:51.677 "subsystem": "bdev", 00:07:51.677 "config": [ 00:07:51.677 { 00:07:51.677 "params": { 00:07:51.677 "block_size": 512, 00:07:51.677 "num_blocks": 1048576, 00:07:51.677 "name": "malloc0" 00:07:51.677 }, 00:07:51.677 "method": "bdev_malloc_create" 00:07:51.677 }, 00:07:51.677 { 00:07:51.677 "params": { 00:07:51.677 "filename": "/dev/zram1", 00:07:51.677 "name": "uring0" 00:07:51.677 }, 00:07:51.677 "method": "bdev_uring_create" 00:07:51.677 }, 00:07:51.677 { 00:07:51.677 "params": { 00:07:51.677 "name": "uring0" 00:07:51.677 }, 00:07:51.677 "method": "bdev_uring_delete" 00:07:51.677 }, 00:07:51.677 { 00:07:51.677 "method": "bdev_wait_for_examine" 00:07:51.677 } 00:07:51.677 ] 00:07:51.677 } 00:07:51.677 ] 00:07:51.677 } 00:07:51.677 [2024-07-15 22:38:07.167550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.937 [2024-07-15 22:38:07.286824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.937 [2024-07-15 22:38:07.346874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.195 [2024-07-15 22:38:07.562868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:52.195 [2024-07-15 22:38:07.562933] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:52.195 [2024-07-15 22:38:07.562946] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:52.195 [2024-07-15 22:38:07.562956] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.454 [2024-07-15 22:38:07.897473] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:52.454 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:52.713 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:52.713 
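Note: the dd_uring_copy test above first copies 512 MB between a malloc bdev (malloc0) and a uring bdev (uring0) backed by /dev/zram1, then deletes uring0 and checks that a further copy fails. The JSON blocks dumped in the log are the bdev configuration spdk_dd is driven with. Below is a minimal stand-alone sketch of the successful copy step; the temporary config path and the --ib/--ob pairing are assumptions for illustration, not the file-descriptor plumbing the in-tree script actually uses.

# Hypothetical reproduction of the copy logged above. Assumes /dev/zram1
# exists and spdk_dd is built at the path shown in the log.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

cat > /tmp/uring_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" } },
        { "method": "bdev_uring_create",
          "params": { "filename": "/dev/zram1", "name": "uring0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Copy the whole 512 MB malloc bdev into the uring bdev.
"$SPDK_DD" --ib=malloc0 --ob=uring0 --json /tmp/uring_copy.json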
************************************ 00:07:52.713 END TEST dd_uring_copy 00:07:52.713 ************************************ 00:07:52.713 00:07:52.713 real 0m15.941s 00:07:52.713 user 0m10.850s 00:07:52.713 sys 0m12.784s 00:07:52.713 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.713 22:38:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.972 22:38:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:52.972 ************************************ 00:07:52.972 END TEST spdk_dd_uring 00:07:52.972 ************************************ 00:07:52.972 00:07:52.972 real 0m16.081s 00:07:52.972 user 0m10.913s 00:07:52.972 sys 0m12.863s 00:07:52.972 22:38:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.972 22:38:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:52.972 22:38:08 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:52.972 22:38:08 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:52.972 22:38:08 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.972 22:38:08 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.972 22:38:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:52.972 ************************************ 00:07:52.972 START TEST spdk_dd_sparse 00:07:52.972 ************************************ 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:52.972 * Looking for test storage... 00:07:52.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:52.972 22:38:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:52.973 1+0 records in 00:07:52.973 1+0 records out 00:07:52.973 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00746318 s, 562 MB/s 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:52.973 1+0 records in 00:07:52.973 1+0 records out 00:07:52.973 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00421415 s, 995 MB/s 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:52.973 1+0 records in 00:07:52.973 1+0 records out 00:07:52.973 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0110002 s, 381 MB/s 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:52.973 ************************************ 00:07:52.973 START TEST dd_sparse_file_to_file 00:07:52.973 ************************************ 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:52.973 22:38:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:53.232 [2024-07-15 22:38:08.546146] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:53.232 [2024-07-15 22:38:08.546704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64327 ] 00:07:53.232 { 00:07:53.232 "subsystems": [ 00:07:53.232 { 00:07:53.232 "subsystem": "bdev", 00:07:53.232 "config": [ 00:07:53.232 { 00:07:53.232 "params": { 00:07:53.232 "block_size": 4096, 00:07:53.232 "filename": "dd_sparse_aio_disk", 00:07:53.232 "name": "dd_aio" 00:07:53.232 }, 00:07:53.232 "method": "bdev_aio_create" 00:07:53.232 }, 00:07:53.232 { 00:07:53.232 "params": { 00:07:53.232 "lvs_name": "dd_lvstore", 00:07:53.232 "bdev_name": "dd_aio" 00:07:53.232 }, 00:07:53.232 "method": "bdev_lvol_create_lvstore" 00:07:53.232 }, 00:07:53.232 { 00:07:53.232 "method": "bdev_wait_for_examine" 00:07:53.232 } 00:07:53.232 ] 00:07:53.232 } 00:07:53.232 ] 00:07:53.232 } 00:07:53.232 [2024-07-15 22:38:08.684177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.490 [2024-07-15 22:38:08.818596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.490 [2024-07-15 22:38:08.880179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.749  Copying: 12/36 [MB] (average 923 MBps) 00:07:53.749 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- 
dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:53.749 00:07:53.749 real 0m0.772s 00:07:53.749 user 0m0.490s 00:07:53.749 sys 0m0.385s 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:53.749 ************************************ 00:07:53.749 END TEST dd_sparse_file_to_file 00:07:53.749 ************************************ 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.749 22:38:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:54.008 ************************************ 00:07:54.008 START TEST dd_sparse_file_to_bdev 00:07:54.008 ************************************ 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:54.008 22:38:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:54.008 [2024-07-15 22:38:09.381324] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:54.008 [2024-07-15 22:38:09.381429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64370 ] 00:07:54.008 { 00:07:54.008 "subsystems": [ 00:07:54.008 { 00:07:54.008 "subsystem": "bdev", 00:07:54.008 "config": [ 00:07:54.008 { 00:07:54.008 "params": { 00:07:54.008 "block_size": 4096, 00:07:54.008 "filename": "dd_sparse_aio_disk", 00:07:54.008 "name": "dd_aio" 00:07:54.008 }, 00:07:54.008 "method": "bdev_aio_create" 00:07:54.008 }, 00:07:54.008 { 00:07:54.008 "params": { 00:07:54.008 "lvs_name": "dd_lvstore", 00:07:54.008 "lvol_name": "dd_lvol", 00:07:54.008 "size_in_mib": 36, 00:07:54.008 "thin_provision": true 00:07:54.008 }, 00:07:54.008 "method": "bdev_lvol_create" 00:07:54.008 }, 00:07:54.008 { 00:07:54.008 "method": "bdev_wait_for_examine" 00:07:54.008 } 00:07:54.008 ] 00:07:54.008 } 00:07:54.008 ] 00:07:54.008 } 00:07:54.008 [2024-07-15 22:38:09.523430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.266 [2024-07-15 22:38:09.657313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.266 [2024-07-15 22:38:09.719267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.524  Copying: 12/36 [MB] (average 521 MBps) 00:07:54.524 00:07:54.524 00:07:54.524 real 0m0.743s 00:07:54.524 user 0m0.489s 00:07:54.524 sys 0m0.371s 00:07:54.525 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.525 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:54.525 ************************************ 00:07:54.525 END TEST dd_sparse_file_to_bdev 00:07:54.525 ************************************ 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:54.783 ************************************ 00:07:54.783 START TEST dd_sparse_bdev_to_file 00:07:54.783 ************************************ 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:54.783 22:38:10 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:54.783 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:54.783 [2024-07-15 22:38:10.174809] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:54.783 [2024-07-15 22:38:10.174924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64408 ] 00:07:54.783 { 00:07:54.783 "subsystems": [ 00:07:54.783 { 00:07:54.783 "subsystem": "bdev", 00:07:54.783 "config": [ 00:07:54.783 { 00:07:54.783 "params": { 00:07:54.783 "block_size": 4096, 00:07:54.783 "filename": "dd_sparse_aio_disk", 00:07:54.783 "name": "dd_aio" 00:07:54.783 }, 00:07:54.784 "method": "bdev_aio_create" 00:07:54.784 }, 00:07:54.784 { 00:07:54.784 "method": "bdev_wait_for_examine" 00:07:54.784 } 00:07:54.784 ] 00:07:54.784 } 00:07:54.784 ] 00:07:54.784 } 00:07:54.784 [2024-07-15 22:38:10.313326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.042 [2024-07-15 22:38:10.433543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.042 [2024-07-15 22:38:10.491471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.300  Copying: 12/36 [MB] (average 1000 MBps) 00:07:55.300 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:55.300 00:07:55.300 real 0m0.724s 00:07:55.300 user 0m0.475s 00:07:55.300 sys 0m0.354s 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.300 22:38:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:55.300 ************************************ 00:07:55.300 END TEST dd_sparse_bdev_to_file 00:07:55.300 ************************************ 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 
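Note: the stat checks in the sparse tests above all verify the same property: the copies keep the apparent size of the original (37748736 bytes) while allocating only 24576 blocks, i.e. the holes survive the round trip through the lvol bdev. A minimal sketch of that verification, reusing two of the file names from the log:

# Hole-preservation check as performed by the sparse tests above.
# %s prints the apparent size in bytes, %b the number of allocated blocks;
# equal sizes with a small block count mean the copy stayed sparse.
src=file_zero2
dst=file_zero3

[[ "$(stat --printf=%s "$src")" == "$(stat --printf=%s "$dst")" ]] \
    || { echo "apparent sizes differ" >&2; exit 1; }
[[ "$(stat --printf=%b "$src")" == "$(stat --printf=%b "$dst")" ]] \
    || { echo "allocated block counts differ" >&2; exit 1; }
echo "sparse layout preserved"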
00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:55.560 00:07:55.560 real 0m2.546s 00:07:55.560 user 0m1.544s 00:07:55.560 sys 0m1.317s 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.560 22:38:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:55.560 ************************************ 00:07:55.560 END TEST spdk_dd_sparse 00:07:55.560 ************************************ 00:07:55.560 22:38:10 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:55.560 22:38:10 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:55.560 22:38:10 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.560 22:38:10 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.560 22:38:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:55.560 ************************************ 00:07:55.560 START TEST spdk_dd_negative 00:07:55.560 ************************************ 00:07:55.560 22:38:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:55.560 * Looking for test storage... 00:07:55.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:55.560 ************************************ 00:07:55.560 START TEST dd_invalid_arguments 00:07:55.560 ************************************ 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.560 22:38:11 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.560 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:55.560 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:55.560 00:07:55.560 CPU options: 00:07:55.560 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:55.560 (like [0,1,10]) 00:07:55.560 --lcores lcore to CPU mapping list. The list is in the format: 00:07:55.560 [<,lcores[@CPUs]>...] 00:07:55.560 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:55.560 Within the group, '-' is used for range separator, 00:07:55.560 ',' is used for single number separator. 00:07:55.560 '( )' can be omitted for single element group, 00:07:55.560 '@' can be omitted if cpus and lcores have the same value 00:07:55.560 --disable-cpumask-locks Disable CPU core lock files. 00:07:55.560 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:55.560 pollers in the app support interrupt mode) 00:07:55.560 -p, --main-core main (primary) core for DPDK 00:07:55.560 00:07:55.560 Configuration options: 00:07:55.560 -c, --config, --json JSON config file 00:07:55.560 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:55.560 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:55.560 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:55.560 --rpcs-allowed comma-separated list of permitted RPCS 00:07:55.560 --json-ignore-init-errors don't exit on invalid config entry 00:07:55.560 00:07:55.560 Memory options: 00:07:55.560 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:55.561 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:55.561 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:55.561 -R, --huge-unlink unlink huge files after initialization 00:07:55.561 -n, --mem-channels number of memory channels used for DPDK 00:07:55.561 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:55.561 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:55.561 --no-huge run without using hugepages 00:07:55.561 -i, --shm-id shared memory ID (optional) 00:07:55.561 -g, --single-file-segments force creating just one hugetlbfs file 00:07:55.561 00:07:55.561 PCI options: 00:07:55.561 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:55.561 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:55.561 -u, --no-pci disable PCI access 00:07:55.561 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:55.561 00:07:55.561 Log options: 00:07:55.561 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:55.561 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:55.561 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:55.561 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:55.561 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:55.561 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:55.561 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:55.561 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:55.561 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:55.561 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:55.561 virtio_vfio_user, vmd) 00:07:55.561 --silence-noticelog disable notice level logging to stderr 00:07:55.561 00:07:55.561 Trace options: 00:07:55.561 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:55.561 setting 0 to disable trace (default 32768) 00:07:55.561 Tracepoints vary in size and can use more than one trace entry. 00:07:55.561 -e, --tpoint-group [:] 00:07:55.561 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:55.561 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:55.561 [2024-07-15 22:38:11.095763] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:55.561 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:55.561 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:55.561 a tracepoint group. First tpoint inside a group can be enabled by 00:07:55.561 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:55.561 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:55.561 in /include/spdk_internal/trace_defs.h 00:07:55.561 00:07:55.561 Other options: 00:07:55.561 -h, --help show this usage 00:07:55.561 -v, --version print SPDK version 00:07:55.561 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:55.561 --env-context Opaque context for use of the env implementation 00:07:55.561 00:07:55.561 Application specific: 00:07:55.561 [--------- DD Options ---------] 00:07:55.561 --if Input file. Must specify either --if or --ib. 00:07:55.561 --ib Input bdev. Must specifier either --if or --ib 00:07:55.561 --of Output file. Must specify either --of or --ob. 00:07:55.561 --ob Output bdev. Must specify either --of or --ob. 00:07:55.561 --iflag Input file flags. 00:07:55.561 --oflag Output file flags. 00:07:55.561 --bs I/O unit size (default: 4096) 00:07:55.561 --qd Queue depth (default: 2) 00:07:55.561 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:55.561 --skip Skip this many I/O units at start of input. (default: 0) 00:07:55.561 --seek Skip this many I/O units at start of output. (default: 0) 00:07:55.561 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:55.561 --sparse Enable hole skipping in input target 00:07:55.561 Available iflag and oflag values: 00:07:55.561 append - append mode 00:07:55.561 direct - use direct I/O for data 00:07:55.561 directory - fail unless a directory 00:07:55.561 dsync - use synchronized I/O for data 00:07:55.561 noatime - do not update access time 00:07:55.561 noctty - do not assign controlling terminal from file 00:07:55.561 nofollow - do not follow symlinks 00:07:55.561 nonblock - use non-blocking I/O 00:07:55.561 sync - use synchronized I/O for data and metadata 00:07:55.561 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:55.561 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.561 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:55.561 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.561 00:07:55.561 real 0m0.064s 00:07:55.561 user 0m0.045s 00:07:55.561 sys 0m0.018s 00:07:55.561 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.561 22:38:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:55.561 ************************************ 00:07:55.561 END TEST dd_invalid_arguments 00:07:55.561 ************************************ 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:55.820 ************************************ 00:07:55.820 START TEST dd_double_input 00:07:55.820 ************************************ 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:55.820 [2024-07-15 22:38:11.207438] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
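Note: every case in this negative suite has the same shape: spdk_dd is run with a missing or contradictory argument and the test only passes if the command exits non-zero with the expected error (here spdk_dd.c:1487 rejecting --if together with --ib). A minimal hand-rolled version of this double-input check, using the paths from the log instead of the NOT helper the in-tree script wraps it in:

# Double-input negative check: giving both --if and --ib must fail with
# "You may specify either --if or --ib, but not both."
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

if "$SPDK_DD" --if="$dump0" --ib= --ob= 2>/dev/null; then
    echo "FAIL: spdk_dd accepted both --if and --ib" >&2
    exit 1
fi
echo "OK: double input rejected as expected"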
00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.820 00:07:55.820 real 0m0.064s 00:07:55.820 user 0m0.035s 00:07:55.820 sys 0m0.028s 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:55.820 ************************************ 00:07:55.820 END TEST dd_double_input 00:07:55.820 ************************************ 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:55.820 ************************************ 00:07:55.820 START TEST dd_double_output 00:07:55.820 ************************************ 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:55.820 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:55.821 [2024-07-15 22:38:11.332228] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.821 00:07:55.821 real 0m0.078s 00:07:55.821 user 0m0.051s 00:07:55.821 sys 0m0.027s 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.821 22:38:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:55.821 ************************************ 00:07:55.821 END TEST dd_double_output 00:07:55.821 ************************************ 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:56.080 ************************************ 00:07:56.080 START TEST dd_no_input 00:07:56.080 ************************************ 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.080 22:38:11 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:56.080 [2024-07-15 22:38:11.454084] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.080 00:07:56.080 real 0m0.073s 00:07:56.080 user 0m0.048s 00:07:56.080 sys 0m0.024s 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.080 ************************************ 00:07:56.080 END TEST dd_no_input 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:56.080 ************************************ 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:56.080 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:56.081 ************************************ 00:07:56.081 START TEST dd_no_output 00:07:56.081 ************************************ 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.081 22:38:11 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.081 [2024-07-15 22:38:11.575197] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.081 00:07:56.081 real 0m0.079s 00:07:56.081 user 0m0.044s 00:07:56.081 sys 0m0.033s 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:56.081 ************************************ 00:07:56.081 END TEST dd_no_output 00:07:56.081 ************************************ 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:56.081 ************************************ 00:07:56.081 START TEST dd_wrong_blocksize 00:07:56.081 ************************************ 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.081 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:56.339 [2024-07-15 22:38:11.699769] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.339 00:07:56.339 real 0m0.074s 00:07:56.339 user 0m0.044s 00:07:56.339 sys 0m0.029s 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.339 ************************************ 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:56.339 END TEST dd_wrong_blocksize 00:07:56.339 ************************************ 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:56.339 ************************************ 00:07:56.339 START TEST dd_smaller_blocksize 00:07:56.339 ************************************ 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.339 22:38:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:56.339 [2024-07-15 22:38:11.814794] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:56.339 [2024-07-15 22:38:11.814887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64626 ] 00:07:56.597 [2024-07-15 22:38:11.950544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.598 [2024-07-15 22:38:12.078644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.598 [2024-07-15 22:38:12.138912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.164 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:57.164 [2024-07-15 22:38:12.488691] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:57.164 [2024-07-15 22:38:12.488763] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.164 [2024-07-15 22:38:12.608473] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.164 00:07:57.164 real 0m0.946s 00:07:57.164 user 0m0.431s 00:07:57.164 sys 0m0.407s 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.164 22:38:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:57.164 ************************************ 00:07:57.164 END TEST dd_smaller_blocksize 00:07:57.164 ************************************ 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.423 ************************************ 00:07:57.423 START TEST dd_invalid_count 00:07:57.423 ************************************ 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:57.423 [2024-07-15 22:38:12.814737] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.423 00:07:57.423 real 0m0.075s 00:07:57.423 user 0m0.047s 00:07:57.423 sys 0m0.026s 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:57.423 ************************************ 00:07:57.423 END TEST dd_invalid_count 00:07:57.423 
************************************ 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.423 ************************************ 00:07:57.423 START TEST dd_invalid_oflag 00:07:57.423 ************************************ 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:57.423 [2024-07-15 22:38:12.939442] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.423 00:07:57.423 real 0m0.080s 00:07:57.423 user 0m0.047s 00:07:57.423 sys 0m0.032s 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.423 22:38:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:57.423 ************************************ 
00:07:57.423 END TEST dd_invalid_oflag 00:07:57.423 ************************************ 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.682 ************************************ 00:07:57.682 START TEST dd_invalid_iflag 00:07:57.682 ************************************ 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:57.682 [2024-07-15 22:38:13.072008] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.682 00:07:57.682 real 0m0.075s 00:07:57.682 user 0m0.047s 00:07:57.682 sys 0m0.026s 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 
00:07:57.682 ************************************ 00:07:57.682 END TEST dd_invalid_iflag 00:07:57.682 ************************************ 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:57.682 ************************************ 00:07:57.682 START TEST dd_unknown_flag 00:07:57.682 ************************************ 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.682 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:57.682 [2024-07-15 22:38:13.194951] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:07:57.682 [2024-07-15 22:38:13.195062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64724 ] 00:07:57.948 [2024-07-15 22:38:13.335003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.948 [2024-07-15 22:38:13.448745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.948 [2024-07-15 22:38:13.502927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.210 [2024-07-15 22:38:13.538417] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:58.210 [2024-07-15 22:38:13.538487] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.210 [2024-07-15 22:38:13.538548] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:58.210 [2024-07-15 22:38:13.538575] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.210 [2024-07-15 22:38:13.538805] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:58.210 [2024-07-15 22:38:13.538834] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.210 [2024-07-15 22:38:13.538885] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:58.210 [2024-07-15 22:38:13.538896] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:58.210 [2024-07-15 22:38:13.656666] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.210 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:58.210 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.210 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:58.210 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:58.211 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:58.211 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.211 00:07:58.211 real 0m0.626s 00:07:58.211 user 0m0.375s 00:07:58.211 sys 0m0.158s 00:07:58.211 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.211 22:38:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:58.211 ************************************ 00:07:58.211 END TEST dd_unknown_flag 00:07:58.211 ************************************ 00:07:58.470 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:58.470 22:38:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:58.470 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.470 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.470 22:38:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.470 ************************************ 00:07:58.470 START TEST dd_invalid_json 00:07:58.470 ************************************ 00:07:58.470 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.471 22:38:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:58.471 [2024-07-15 22:38:13.875255] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
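Editor's note: the dd negative tests traced above (dd_wrong_blocksize, dd_invalid_count, dd_invalid_oflag, dd_invalid_iflag, dd_unknown_flag, dd_invalid_json) all follow the same shape: run spdk_dd with one deliberately bad argument, capture its exit status in es, and require it to be non-zero. Reduced to plain shell, this is only a sketch of that idea, not the actual NOT/valid_exec_arg helpers from autotest_common.sh; paths are the ones seen in the trace:

  # expect failure: spdk_dd must reject a zero block size
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
         --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
         --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0; then
      echo "spdk_dd unexpectedly accepted --bs=0" >&2
      exit 1
  fi

The "Invalid --bs value" and "Invalid --count value" errors in the trace are spdk_dd's own argument validation firing, which is exactly the outcome these tests assert.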
00:07:58.471 [2024-07-15 22:38:13.875369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64752 ] 00:07:58.471 [2024-07-15 22:38:14.010428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.737 [2024-07-15 22:38:14.134678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.737 [2024-07-15 22:38:14.134756] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:58.737 [2024-07-15 22:38:14.134777] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:58.737 [2024-07-15 22:38:14.134790] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.737 [2024-07-15 22:38:14.134834] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.737 00:07:58.737 real 0m0.436s 00:07:58.737 user 0m0.253s 00:07:58.737 sys 0m0.080s 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.737 ************************************ 00:07:58.737 END TEST dd_invalid_json 00:07:58.737 ************************************ 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:58.737 00:07:58.737 real 0m3.334s 00:07:58.737 user 0m1.697s 00:07:58.737 sys 0m1.298s 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.737 ************************************ 00:07:58.737 END TEST spdk_dd_negative 00:07:58.737 22:38:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:58.737 ************************************ 00:07:58.996 22:38:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:58.996 00:07:58.996 real 1m20.296s 00:07:58.996 user 0m52.797s 00:07:58.996 sys 0m33.977s 00:07:58.996 22:38:14 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.996 22:38:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 ************************************ 00:07:58.996 END TEST spdk_dd 00:07:58.996 ************************************ 00:07:58.996 22:38:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:58.996 22:38:14 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:58.996 22:38:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:58.996 22:38:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:58.996 22:38:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.996 22:38:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 22:38:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:58.996 22:38:14 
-- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:58.996 22:38:14 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:58.996 22:38:14 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:58.996 22:38:14 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:58.996 22:38:14 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:58.996 22:38:14 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.996 22:38:14 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.996 22:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.996 22:38:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 ************************************ 00:07:58.996 START TEST nvmf_tcp 00:07:58.996 ************************************ 00:07:58.996 22:38:14 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.996 * Looking for test storage... 00:07:58.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.996 22:38:14 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.996 22:38:14 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.996 22:38:14 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.996 22:38:14 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.996 22:38:14 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.996 22:38:14 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.996 22:38:14 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:58.996 22:38:14 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:58.996 22:38:14 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.996 22:38:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:58.996 22:38:14 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:58.996 22:38:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.996 22:38:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.996 22:38:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.996 ************************************ 00:07:58.996 START TEST nvmf_host_management 00:07:58.996 ************************************ 00:07:58.996 
22:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:59.255 * Looking for test storage... 00:07:59.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:59.255 Cannot find device "nvmf_init_br" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:59.255 Cannot find device "nvmf_tgt_br" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.255 Cannot find device "nvmf_tgt_br2" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:59.255 Cannot find device "nvmf_init_br" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:59.255 Cannot find device "nvmf_tgt_br" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:59.255 22:38:14 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:59.255 Cannot find device "nvmf_tgt_br2" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:59.255 Cannot find device "nvmf_br" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:59.255 Cannot find device "nvmf_init_if" 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:59.255 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:59.256 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
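Editor's note: condensed, the nvmf_veth_init plumbing traced above (completed by the bridge-mastering, iptables and ping steps that follow just below) builds a bridged veth topology between the host and a dedicated target namespace. This is a standalone sketch using the interface and namespace names from the trace, not a verbatim copy of nvmf/common.sh; the second target interface (nvmf_tgt_if2, 10.0.0.3) is set up the same way and omitted here:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                      # same sanity check as the trace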
00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:59.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:07:59.514 00:07:59.514 --- 10.0.0.2 ping statistics --- 00:07:59.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.514 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:59.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:59.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:59.514 00:07:59.514 --- 10.0.0.3 ping statistics --- 00:07:59.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.514 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:59.514 22:38:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:59.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:59.514 00:07:59.514 --- 10.0.0.1 ping statistics --- 00:07:59.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.514 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65013 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65013 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65013 ']' 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.514 22:38:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.771 [2024-07-15 22:38:15.085423] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:07:59.772 [2024-07-15 22:38:15.085551] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.772 [2024-07-15 22:38:15.225869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.028 [2024-07-15 22:38:15.360623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.028 [2024-07-15 22:38:15.360849] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.028 [2024-07-15 22:38:15.360981] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.028 [2024-07-15 22:38:15.361077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.028 [2024-07-15 22:38:15.361181] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
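Editor's note: with the namespace in place, nvmfappstart (traced above) launches the target inside it and then waits for its RPC socket before issuing any commands. A simplified manual equivalent, using the binary path and flags shown in the trace; waitforlisten's internal polling is approximated here with a plain loop over rpc.py (an assumption, not the real helper):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll until the default RPC socket answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done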
00:08:00.028 [2024-07-15 22:38:15.361436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.028 [2024-07-15 22:38:15.362062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.028 [2024-07-15 22:38:15.362152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.028 [2024-07-15 22:38:15.362160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.028 [2024-07-15 22:38:15.418999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 [2024-07-15 22:38:16.206099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 Malloc0 00:08:00.962 [2024-07-15 22:38:16.287997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
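Editor's note: the RPC batch that host_management.sh assembles (the rm -rf rpcs.txt / cat / rpc_cmd sequence above) is not echoed in the log, but the objects it creates are visible in the trace: the tcp transport, the Malloc0 bdev (64 MiB of 512-byte blocks per the MALLOC_* variables earlier), and subsystem cnode0 listening on 10.0.0.2:4420. A representative batch using standard rpc.py commands follows; the exact flags the script passes are an assumption:

  # transport creation is traced explicitly above:
  #   rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  # a plausible shape for the batched part (RPC names are standard, flags assumed):
  bdev_malloc_create 64 512 -b Malloc0
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420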
00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65081 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65081 /var/tmp/bdevperf.sock 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65081 ']' 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:00.962 { 00:08:00.962 "params": { 00:08:00.962 "name": "Nvme$subsystem", 00:08:00.962 "trtype": "$TEST_TRANSPORT", 00:08:00.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.962 "adrfam": "ipv4", 00:08:00.962 "trsvcid": "$NVMF_PORT", 00:08:00.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.962 "hdgst": ${hdgst:-false}, 00:08:00.962 "ddgst": ${ddgst:-false} 00:08:00.962 }, 00:08:00.962 "method": "bdev_nvme_attach_controller" 00:08:00.962 } 00:08:00.962 EOF 00:08:00.962 )") 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:00.962 22:38:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:00.962 "params": { 00:08:00.962 "name": "Nvme0", 00:08:00.962 "trtype": "tcp", 00:08:00.962 "traddr": "10.0.0.2", 00:08:00.962 "adrfam": "ipv4", 00:08:00.962 "trsvcid": "4420", 00:08:00.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:00.962 "hdgst": false, 00:08:00.962 "ddgst": false 00:08:00.962 }, 00:08:00.962 "method": "bdev_nvme_attach_controller" 00:08:00.962 }' 00:08:00.962 [2024-07-15 22:38:16.392703] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
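Editor's note: the JSON printed above is what gen_nvmf_target_json 0 expands to, and bdevperf consumes it through process substitution (the --json /dev/fd/63 seen in the trace). Roughly, the invocation amounts to the following; this is a sketch, not the literal host_management.sh line:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      -q 64 -o 65536 -w verify -t 10 \
      --json <(gen_nvmf_target_json 0)   # attaches Nvme0 to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420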
00:08:00.962 [2024-07-15 22:38:16.392810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65081 ] 00:08:01.219 [2024-07-15 22:38:16.531472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.219 [2024-07-15 22:38:16.670604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.219 [2024-07-15 22:38:16.737327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.476 Running I/O for 10 seconds... 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:02.042 
22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.042 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.042 [2024-07-15 22:38:17.494049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.042 [2024-07-15 22:38:17.494394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494469] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9580 is same with the state(5) to be set 00:08:02.043 [2024-07-15 22:38:17.494745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.494981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.494990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:02.043 [2024-07-15 22:38:17.495026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.043 [2024-07-15 22:38:17.495161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.043 [2024-07-15 22:38:17.495172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 
[2024-07-15 22:38:17.495233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 
22:38:17.495438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 
22:38:17.495658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.044 [2024-07-15 22:38:17.495834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.044 [2024-07-15 22:38:17.495850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.495862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 
22:38:17.495882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.495902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.495923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.495942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.495962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.495982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.495990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 
22:38:17.496085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.045 [2024-07-15 22:38:17.496134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.496144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30ad0 is same with the state(5) to be set 00:08:02.045 [2024-07-15 22:38:17.496212] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d30ad0 was disconnected and freed. reset controller. 00:08:02.045 [2024-07-15 22:38:17.497488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:02.045 task offset: 114688 on job bdev=Nvme0n1 fails 00:08:02.045 00:08:02.045 Latency(us) 00:08:02.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.045 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:02.045 Job: Nvme0n1 ended in about 0.63 seconds with error 00:08:02.045 Verification LBA range: start 0x0 length 0x400 00:08:02.045 Nvme0n1 : 0.63 1412.80 88.30 100.91 0.00 41119.77 2859.75 39798.23 00:08:02.045 =================================================================================================================== 00:08:02.045 Total : 1412.80 88.30 100.91 0.00 41119.77 2859.75 39798.23 00:08:02.045 [2024-07-15 22:38:17.499823] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.045 [2024-07-15 22:38:17.499864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d288d0 (9): Bad file descriptor 00:08:02.045 [2024-07-15 22:38:17.502013] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:02.045 [2024-07-15 22:38:17.502216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:02.045 [2024-07-15 22:38:17.502247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:02.045 [2024-07-15 22:38:17.502264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:02.045 [2024-07-15 22:38:17.502275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:02.045 [2024-07-15 22:38:17.502285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:02.045 [2024-07-15 22:38:17.502294] 
nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d288d0 00:08:02.045 [2024-07-15 22:38:17.502328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d288d0 (9): Bad file descriptor 00:08:02.045 [2024-07-15 22:38:17.502346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:02.045 [2024-07-15 22:38:17.502356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:02.045 [2024-07-15 22:38:17.502367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:02.045 [2024-07-15 22:38:17.502384] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:02.045 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.045 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:02.045 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.045 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:02.045 22:38:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.045 22:38:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65081 00:08:02.978 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65081) - No such process 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:02.978 { 00:08:02.978 "params": { 00:08:02.978 "name": "Nvme$subsystem", 00:08:02.978 "trtype": "$TEST_TRANSPORT", 00:08:02.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.978 "adrfam": "ipv4", 00:08:02.978 "trsvcid": "$NVMF_PORT", 00:08:02.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.978 "hdgst": ${hdgst:-false}, 00:08:02.978 "ddgst": ${ddgst:-false} 00:08:02.978 }, 00:08:02.978 "method": "bdev_nvme_attach_controller" 00:08:02.978 } 00:08:02.978 EOF 00:08:02.978 )") 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
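Taken together, the trace above is the core of the host-management check: the script waits until bdevperf has completed a minimum amount of IO (835 reads against a threshold of 100 here), revokes the host's access so the target aborts the submission queue (the repeated ABORTED - SQ DELETION completions) and refuses the reconnect, kills the now-dead bdevperf, then re-adds the host and prepares the short one-second bdevperf rerun whose config is being generated here. A condensed sketch of that control flow, using rpc.py directly in place of the script's rpc_cmd wrapper and an assumed polling interval:

# Poll the initiator-side iostat until some reads have completed (threshold from the trace).
while true; do
    reads=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25   # interval assumed; the script retries a bounded number of times instead
done
# Revoke access: in-flight commands are aborted and the controller reset is rejected with
# "does not allow host".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access, then rerun a short bdevperf pass to prove the data path works again.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0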
00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:02.978 22:38:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:02.978 "params": { 00:08:02.978 "name": "Nvme0", 00:08:02.978 "trtype": "tcp", 00:08:02.978 "traddr": "10.0.0.2", 00:08:02.978 "adrfam": "ipv4", 00:08:02.978 "trsvcid": "4420", 00:08:02.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:02.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:02.978 "hdgst": false, 00:08:02.978 "ddgst": false 00:08:02.978 }, 00:08:02.978 "method": "bdev_nvme_attach_controller" 00:08:02.978 }' 00:08:03.235 [2024-07-15 22:38:18.572472] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:08:03.235 [2024-07-15 22:38:18.572582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65119 ] 00:08:03.235 [2024-07-15 22:38:18.711235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.493 [2024-07-15 22:38:18.836256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.493 [2024-07-15 22:38:18.902406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.493 Running I/O for 1 seconds... 00:08:04.867 00:08:04.867 Latency(us) 00:08:04.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.867 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:04.867 Verification LBA range: start 0x0 length 0x400 00:08:04.867 Nvme0n1 : 1.01 1522.27 95.14 0.00 0.00 41138.90 4319.42 40989.79 00:08:04.867 =================================================================================================================== 00:08:04.867 Total : 1522.27 95.14 0.00 0.00 41138.90 4319.42 40989.79 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.867 rmmod nvme_tcp 00:08:04.867 rmmod nvme_fabrics 00:08:04.867 rmmod nvme_keyring 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:04.867 22:38:20 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65013 ']' 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65013 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65013 ']' 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65013 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65013 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:04.867 killing process with pid 65013 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65013' 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65013 00:08:04.867 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65013 00:08:05.126 [2024-07-15 22:38:20.637357] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.126 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.385 22:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:05.385 22:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:05.385 00:08:05.385 real 0m6.176s 00:08:05.385 user 0m23.998s 00:08:05.385 sys 0m1.541s 00:08:05.385 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.385 22:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.385 ************************************ 00:08:05.385 END TEST nvmf_host_management 00:08:05.385 ************************************ 00:08:05.385 22:38:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:05.385 22:38:20 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:05.385 22:38:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.385 22:38:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.385 22:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.385 ************************************ 00:08:05.385 START TEST nvmf_lvol 00:08:05.385 
************************************ 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:05.385 * Looking for test storage... 00:08:05.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.385 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
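The constants set just above (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 for the backing bdev, LVOL_BDEV_INIT_SIZE=20 and LVOL_BDEV_FINAL_SIZE=30 for the logical volume) drive the lvol test body that follows later in the log. Purely as a hypothetical illustration of the kind of RPC flow those numbers feed, using stock lvol RPCs rather than the test's own helpers:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc_py bdev_lvol_create_lvstore Malloc0 lvs0       # lvstore name is illustrative
$rpc_py bdev_lvol_create -l lvs0 lvol0 20           # LVOL_BDEV_INIT_SIZE in MiB
$rpc_py bdev_lvol_resize lvs0/lvol0 30              # grown to LVOL_BDEV_FINAL_SIZE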
00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:05.386 Cannot find device "nvmf_tgt_br" 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.386 Cannot find device "nvmf_tgt_br2" 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:05.386 Cannot find device "nvmf_tgt_br" 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:05.386 Cannot find device "nvmf_tgt_br2" 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:05.386 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:05.653 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:05.653 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:05.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.653 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:05.653 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.653 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:05.653 22:38:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:05.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:05.653 00:08:05.653 --- 10.0.0.2 ping statistics --- 00:08:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.653 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:05.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:05.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:05.653 00:08:05.653 --- 10.0.0.3 ping statistics --- 00:08:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.653 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:05.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:05.653 00:08:05.653 --- 10.0.0.1 ping statistics --- 00:08:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.653 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65331 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65331 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65331 ']' 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.653 22:38:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.911 [2024-07-15 22:38:21.254503] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:08:05.911 [2024-07-15 22:38:21.254632] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.911 [2024-07-15 22:38:21.390874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:06.169 [2024-07-15 22:38:21.502328] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.169 [2024-07-15 22:38:21.502654] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.169 [2024-07-15 22:38:21.502789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.169 [2024-07-15 22:38:21.502918] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.169 [2024-07-15 22:38:21.502954] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.169 [2024-07-15 22:38:21.503203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.169 [2024-07-15 22:38:21.503334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.169 [2024-07-15 22:38:21.503413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.169 [2024-07-15 22:38:21.557591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.104 [2024-07-15 22:38:22.612550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.104 22:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.669 22:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:07.669 22:38:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.669 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:07.669 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:08.232 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:08.232 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0a6826b0-e232-4a87-b94a-b7569ac12d00 00:08:08.232 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0a6826b0-e232-4a87-b94a-b7569ac12d00 lvol 20 00:08:08.489 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=df64d2cc-f445-4a1e-b650-ca7c990f7bef 00:08:08.489 22:38:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.747 22:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df64d2cc-f445-4a1e-b650-ca7c990f7bef 00:08:09.005 22:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.270 [2024-07-15 22:38:24.769309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.270 22:38:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.528 22:38:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:09.528 22:38:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65408 00:08:09.528 22:38:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:10.903 22:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot df64d2cc-f445-4a1e-b650-ca7c990f7bef MY_SNAPSHOT 00:08:10.903 22:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d3a6f68c-e9f3-4c3b-b112-ee66aca0a2b7 00:08:10.903 22:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize df64d2cc-f445-4a1e-b650-ca7c990f7bef 30 00:08:11.470 22:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d3a6f68c-e9f3-4c3b-b112-ee66aca0a2b7 MY_CLONE 00:08:11.728 22:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=26812b70-1f13-45ee-a058-a7955fc11cb5 00:08:11.728 22:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 26812b70-1f13-45ee-a058-a7955fc11cb5 00:08:11.986 22:38:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65408 00:08:20.102 Initializing NVMe Controllers 00:08:20.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.102 Controller IO queue size 128, less than required. 00:08:20.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:20.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.102 Initialization complete. Launching workers. 
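The lines above build the device stack under test and then exercise the logical-volume operations while spdk_nvme_perf drives random writes at it: two 64 MiB malloc bdevs are striped into a RAID0 bdev, an lvstore is created on the RAID, a 20 MiB lvol is carved out and exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and while the 10-second workload runs the lvol is snapshotted, resized to 30 MiB, cloned, and the clone inflated. A condensed sketch of that sequence, with the rpc.py invocations copied from the trace and the printed UUIDs replaced by shell variables; the per-core results of the perf run follow below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Device stack: malloc x2 -> RAID0 -> lvstore -> 20 MiB lvol
    $rpc bdev_malloc_create 64 512                      # -> Malloc0
    $rpc bdev_malloc_create 64 512                      # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe/TCP.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Random writes from the initiator while lvol operations run on the target.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait "$perf_pid"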
00:08:20.102 ======================================================== 00:08:20.102 Latency(us) 00:08:20.102 Device Information : IOPS MiB/s Average min max 00:08:20.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8565.40 33.46 14943.34 1146.30 87454.60 00:08:20.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8429.60 32.93 15192.93 2553.02 51631.10 00:08:20.102 ======================================================== 00:08:20.102 Total : 16995.00 66.39 15067.14 1146.30 87454.60 00:08:20.102 00:08:20.102 22:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.102 22:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete df64d2cc-f445-4a1e-b650-ca7c990f7bef 00:08:20.359 22:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a6826b0-e232-4a87-b94a-b7569ac12d00 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.925 rmmod nvme_tcp 00:08:20.925 rmmod nvme_fabrics 00:08:20.925 rmmod nvme_keyring 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65331 ']' 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65331 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65331 ']' 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65331 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65331 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.925 killing process with pid 65331 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65331' 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65331 00:08:20.925 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65331 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.183 
22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:21.183 00:08:21.183 real 0m15.902s 00:08:21.183 user 1m6.079s 00:08:21.183 sys 0m4.382s 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.183 22:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.183 ************************************ 00:08:21.183 END TEST nvmf_lvol 00:08:21.183 ************************************ 00:08:21.183 22:38:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:21.183 22:38:36 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.183 22:38:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:21.183 22:38:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.183 22:38:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.183 ************************************ 00:08:21.184 START TEST nvmf_lvs_grow 00:08:21.184 ************************************ 00:08:21.184 22:38:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:21.442 * Looking for test storage... 
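Teardown of nvmf_lvol mirrors setup in reverse: the subsystem is deleted before the lvol and lvstore underneath it, the nvme-tcp stack is unloaded on the initiator side, the target process is killed, and the namespace and addresses are flushed. A minimal sketch of that order; the variables stand in for the pid and UUIDs printed earlier, and the plain "ip netns delete" stands in for the _remove_spdk_ns helper, whose output is not traced above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # $lvol, $lvs and $nvmfpid hold the values reported earlier in the run.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # stop exposing the namespace first
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"

    modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete nvmf_tgt_ns_spdk || true
    ip -4 addr flush nvmf_init_if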
00:08:21.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:21.442 Cannot find device "nvmf_tgt_br" 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.442 Cannot find device "nvmf_tgt_br2" 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:21.442 Cannot find device "nvmf_tgt_br" 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:21.442 Cannot find device "nvmf_tgt_br2" 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:21.442 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:21.442 22:38:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:21.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:21.700 00:08:21.700 --- 10.0.0.2 ping statistics --- 00:08:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.700 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:21.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:21.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:21.700 00:08:21.700 --- 10.0.0.3 ping statistics --- 00:08:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.700 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:21.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:21.700 00:08:21.700 --- 10.0.0.1 ping statistics --- 00:08:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.700 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65731 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65731 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65731 ']' 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
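nvmfappstart launches the target inside the namespace (the "ip netns exec" prefix comes from NVMF_TARGET_NS_CMD) and then blocks until the RPC socket answers. A rough equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket; the polling loop is only an illustration of what waitforlisten does, not a copy of it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Poll the RPC socket until the app is ready (the harness gives up after ~100 tries).
    for _ in $(seq 1 100); do
        if $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done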
00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.700 22:38:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.700 [2024-07-15 22:38:37.241732] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:08:21.700 [2024-07-15 22:38:37.242048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.959 [2024-07-15 22:38:37.382244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.959 [2024-07-15 22:38:37.508971] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.959 [2024-07-15 22:38:37.509058] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.959 [2024-07-15 22:38:37.509086] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.959 [2024-07-15 22:38:37.509096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.959 [2024-07-15 22:38:37.509105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.959 [2024-07-15 22:38:37.509142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.217 [2024-07-15 22:38:37.569209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.785 22:38:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:23.044 [2024-07-15 22:38:38.603263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.303 ************************************ 00:08:23.303 START TEST lvs_grow_clean 00:08:23.303 ************************************ 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:23.303 22:38:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:23.303 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.561 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:23.561 22:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:23.820 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:23.820 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:23.820 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.078 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.078 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.078 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f lvol 150 00:08:24.335 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ae23708c-32a1-462a-9a33-4c5b31ab3315 00:08:24.336 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.336 22:38:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:24.593 [2024-07-15 22:38:39.985461] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:24.593 [2024-07-15 22:38:39.985556] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:24.593 true 00:08:24.593 22:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:24.593 22:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:24.851 22:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:24.851 22:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.109 22:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae23708c-32a1-462a-9a33-4c5b31ab3315 00:08:25.368 22:38:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:25.627 [2024-07-15 22:38:41.030000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.627 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65818 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65818 /var/tmp/bdevperf.sock 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65818 ']' 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.886 22:38:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:25.886 [2024-07-15 22:38:41.367152] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
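For the lvs_grow test the initiator side is bdevperf rather than spdk_nvme_perf: it is started with -z so it sits idle waiting for configuration on /var/tmp/bdevperf.sock, an NVMe-oF controller is attached to it over TCP, and the workload is then kicked off through bdevperf.py. A condensed sketch of that flow, with the launch, attach and perform_tests calls taken from the trace:

    spdk=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle (-z) on its own RPC socket, pinned to core 1 (-m 0x2).
    $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # Attach the exported lvol as bdev Nvme0n1 over NVMe/TCP.
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Run the configured job; its per-second results are reported below.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests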
00:08:25.886 [2024-07-15 22:38:41.367263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65818 ] 00:08:26.146 [2024-07-15 22:38:41.503081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.146 [2024-07-15 22:38:41.617889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.146 [2024-07-15 22:38:41.671709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.081 22:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.081 22:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:27.081 22:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.383 Nvme0n1 00:08:27.383 22:38:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.642 [ 00:08:27.642 { 00:08:27.642 "name": "Nvme0n1", 00:08:27.642 "aliases": [ 00:08:27.642 "ae23708c-32a1-462a-9a33-4c5b31ab3315" 00:08:27.642 ], 00:08:27.642 "product_name": "NVMe disk", 00:08:27.642 "block_size": 4096, 00:08:27.642 "num_blocks": 38912, 00:08:27.642 "uuid": "ae23708c-32a1-462a-9a33-4c5b31ab3315", 00:08:27.642 "assigned_rate_limits": { 00:08:27.642 "rw_ios_per_sec": 0, 00:08:27.642 "rw_mbytes_per_sec": 0, 00:08:27.642 "r_mbytes_per_sec": 0, 00:08:27.642 "w_mbytes_per_sec": 0 00:08:27.642 }, 00:08:27.642 "claimed": false, 00:08:27.642 "zoned": false, 00:08:27.642 "supported_io_types": { 00:08:27.642 "read": true, 00:08:27.642 "write": true, 00:08:27.642 "unmap": true, 00:08:27.642 "flush": true, 00:08:27.642 "reset": true, 00:08:27.642 "nvme_admin": true, 00:08:27.642 "nvme_io": true, 00:08:27.642 "nvme_io_md": false, 00:08:27.642 "write_zeroes": true, 00:08:27.642 "zcopy": false, 00:08:27.642 "get_zone_info": false, 00:08:27.642 "zone_management": false, 00:08:27.642 "zone_append": false, 00:08:27.642 "compare": true, 00:08:27.642 "compare_and_write": true, 00:08:27.642 "abort": true, 00:08:27.642 "seek_hole": false, 00:08:27.642 "seek_data": false, 00:08:27.642 "copy": true, 00:08:27.642 "nvme_iov_md": false 00:08:27.642 }, 00:08:27.642 "memory_domains": [ 00:08:27.642 { 00:08:27.642 "dma_device_id": "system", 00:08:27.642 "dma_device_type": 1 00:08:27.642 } 00:08:27.642 ], 00:08:27.642 "driver_specific": { 00:08:27.642 "nvme": [ 00:08:27.642 { 00:08:27.642 "trid": { 00:08:27.642 "trtype": "TCP", 00:08:27.642 "adrfam": "IPv4", 00:08:27.642 "traddr": "10.0.0.2", 00:08:27.642 "trsvcid": "4420", 00:08:27.642 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.642 }, 00:08:27.642 "ctrlr_data": { 00:08:27.642 "cntlid": 1, 00:08:27.642 "vendor_id": "0x8086", 00:08:27.642 "model_number": "SPDK bdev Controller", 00:08:27.642 "serial_number": "SPDK0", 00:08:27.642 "firmware_revision": "24.09", 00:08:27.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.642 "oacs": { 00:08:27.642 "security": 0, 00:08:27.642 "format": 0, 00:08:27.642 "firmware": 0, 00:08:27.642 "ns_manage": 0 00:08:27.642 }, 00:08:27.642 "multi_ctrlr": true, 00:08:27.642 "ana_reporting": false 
00:08:27.642 }, 00:08:27.642 "vs": { 00:08:27.642 "nvme_version": "1.3" 00:08:27.642 }, 00:08:27.642 "ns_data": { 00:08:27.642 "id": 1, 00:08:27.642 "can_share": true 00:08:27.642 } 00:08:27.642 } 00:08:27.642 ], 00:08:27.642 "mp_policy": "active_passive" 00:08:27.642 } 00:08:27.642 } 00:08:27.642 ] 00:08:27.642 22:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.642 22:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65847 00:08:27.642 22:38:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.642 Running I/O for 10 seconds... 00:08:29.020 Latency(us) 00:08:29.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.020 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:29.020 =================================================================================================================== 00:08:29.021 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:29.021 00:08:29.587 22:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:29.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.587 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:29.587 =================================================================================================================== 00:08:29.587 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:29.587 00:08:29.859 true 00:08:29.859 22:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:29.859 22:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:30.118 22:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:30.118 22:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:30.118 22:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65847 00:08:30.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.685 Nvme0n1 : 3.00 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:08:30.685 =================================================================================================================== 00:08:30.685 Total : 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:08:30.685 00:08:31.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.683 Nvme0n1 : 4.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:31.683 =================================================================================================================== 00:08:31.683 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:31.683 00:08:32.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.621 Nvme0n1 : 5.00 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:08:32.621 =================================================================================================================== 00:08:32.621 Total : 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:08:32.621 
00:08:33.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.997 Nvme0n1 : 6.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:33.997 =================================================================================================================== 00:08:33.997 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:33.997 00:08:34.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.932 Nvme0n1 : 7.00 7075.71 27.64 0.00 0.00 0.00 0.00 0.00 00:08:34.932 =================================================================================================================== 00:08:34.932 Total : 7075.71 27.64 0.00 0.00 0.00 0.00 0.00 00:08:34.932 00:08:35.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.868 Nvme0n1 : 8.00 7032.62 27.47 0.00 0.00 0.00 0.00 0.00 00:08:35.868 =================================================================================================================== 00:08:35.868 Total : 7032.62 27.47 0.00 0.00 0.00 0.00 0.00 00:08:35.868 00:08:36.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.804 Nvme0n1 : 9.00 7013.22 27.40 0.00 0.00 0.00 0.00 0.00 00:08:36.804 =================================================================================================================== 00:08:36.804 Total : 7013.22 27.40 0.00 0.00 0.00 0.00 0.00 00:08:36.804 00:08:37.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.738 Nvme0n1 : 10.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:37.738 =================================================================================================================== 00:08:37.738 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:37.738 00:08:37.738 00:08:37.738 Latency(us) 00:08:37.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.738 Nvme0n1 : 10.01 6989.69 27.30 0.00 0.00 18306.23 15371.17 45756.04 00:08:37.738 =================================================================================================================== 00:08:37.738 Total : 6989.69 27.30 0.00 0.00 18306.23 15371.17 45756.04 00:08:37.738 0 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65818 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65818 ']' 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65818 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65818 00:08:37.738 killing process with pid 65818 00:08:37.738 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.738 00:08:37.738 Latency(us) 00:08:37.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.738 =================================================================================================================== 00:08:37.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 
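The growth check exercised above is the core of lvs_grow_clean, and the cluster counts line up with the sizes used: a 200 MiB file carved into 4 MiB clusters gives 50 clusters, one of which the lvstore keeps for metadata, hence total_data_clusters == 49; after the backing file is truncated to 400 MiB, bdev_aio_rescan picks up the new block count and bdev_lvol_grow_lvstore extends the store to 99 data clusters, while the 150 MiB lvol keeps ceil(150/4) = 38 of them allocated, leaving 99 - 38 = 61 free, which is what the later free_clusters check expects. A condensed sketch of the grow sequence, with the commands taken from the trace and the printed lvstore UUID replaced by a shell variable:

    spdk=/home/vagrant/spdk_repo/spdk
    rpc=$spdk/scripts/rpc.py
    aio_file=$spdk/test/nvmf/target/aio_bdev

    # 200 MiB file-backed AIO bdev, lvstore with 4 MiB clusters -> 49 data clusters.
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150        # 150 MiB volume

    # Grow the backing file, rescan the AIO bdev, then grow the lvstore onto it.
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99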
00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65818' 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65818 00:08:37.738 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65818 00:08:37.997 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.254 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.512 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:38.512 22:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.770 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.770 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:38.770 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.028 [2024-07-15 22:38:54.459396] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:39.028 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:39.285 request: 00:08:39.285 { 00:08:39.285 "uuid": "7e085b06-6df3-4ee1-8ecb-e8d42f95374f", 00:08:39.285 "method": "bdev_lvol_get_lvstores", 00:08:39.285 "req_id": 1 00:08:39.285 } 00:08:39.285 Got JSON-RPC error response 00:08:39.285 response: 00:08:39.285 { 00:08:39.285 "code": -19, 00:08:39.285 "message": "No such device" 00:08:39.285 } 00:08:39.285 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:39.285 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:39.285 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:39.285 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:39.285 22:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.554 aio_bdev 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ae23708c-32a1-462a-9a33-4c5b31ab3315 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=ae23708c-32a1-462a-9a33-4c5b31ab3315 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:39.554 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.822 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae23708c-32a1-462a-9a33-4c5b31ab3315 -t 2000 00:08:40.080 [ 00:08:40.080 { 00:08:40.080 "name": "ae23708c-32a1-462a-9a33-4c5b31ab3315", 00:08:40.080 "aliases": [ 00:08:40.080 "lvs/lvol" 00:08:40.080 ], 00:08:40.080 "product_name": "Logical Volume", 00:08:40.080 "block_size": 4096, 00:08:40.080 "num_blocks": 38912, 00:08:40.080 "uuid": "ae23708c-32a1-462a-9a33-4c5b31ab3315", 00:08:40.080 "assigned_rate_limits": { 00:08:40.080 "rw_ios_per_sec": 0, 00:08:40.080 "rw_mbytes_per_sec": 0, 00:08:40.080 "r_mbytes_per_sec": 0, 00:08:40.080 "w_mbytes_per_sec": 0 00:08:40.080 }, 00:08:40.080 "claimed": false, 00:08:40.080 "zoned": false, 00:08:40.080 "supported_io_types": { 00:08:40.080 "read": true, 00:08:40.080 "write": true, 00:08:40.080 "unmap": true, 00:08:40.080 "flush": false, 00:08:40.080 "reset": true, 00:08:40.080 "nvme_admin": false, 00:08:40.080 "nvme_io": false, 00:08:40.080 "nvme_io_md": false, 00:08:40.080 "write_zeroes": true, 00:08:40.080 "zcopy": false, 00:08:40.080 "get_zone_info": false, 00:08:40.080 "zone_management": false, 00:08:40.080 "zone_append": false, 00:08:40.080 "compare": false, 00:08:40.080 "compare_and_write": false, 00:08:40.080 "abort": false, 00:08:40.080 "seek_hole": true, 00:08:40.080 "seek_data": true, 00:08:40.080 "copy": false, 00:08:40.080 "nvme_iov_md": false 00:08:40.081 }, 00:08:40.081 "driver_specific": { 00:08:40.081 "lvol": { 
00:08:40.081 "lvol_store_uuid": "7e085b06-6df3-4ee1-8ecb-e8d42f95374f", 00:08:40.081 "base_bdev": "aio_bdev", 00:08:40.081 "thin_provision": false, 00:08:40.081 "num_allocated_clusters": 38, 00:08:40.081 "snapshot": false, 00:08:40.081 "clone": false, 00:08:40.081 "esnap_clone": false 00:08:40.081 } 00:08:40.081 } 00:08:40.081 } 00:08:40.081 ] 00:08:40.081 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:40.081 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:40.081 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:40.338 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:40.338 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.338 22:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:40.596 22:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.596 22:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ae23708c-32a1-462a-9a33-4c5b31ab3315 00:08:40.854 22:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e085b06-6df3-4ee1-8ecb-e8d42f95374f 00:08:41.111 22:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.368 22:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:41.932 ************************************ 00:08:41.932 END TEST lvs_grow_clean 00:08:41.932 ************************************ 00:08:41.932 00:08:41.932 real 0m18.637s 00:08:41.932 user 0m17.622s 00:08:41.932 sys 0m2.602s 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.932 ************************************ 00:08:41.932 START TEST lvs_grow_dirty 00:08:41.932 ************************************ 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:41.932 22:38:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:41.932 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.189 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.189 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.447 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:42.447 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:42.447 22:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:42.706 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:42.706 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:42.706 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 60f9218a-cb93-48e6-a53a-df00df9705fa lvol 150 00:08:42.965 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:08:42.965 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:42.965 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:43.533 [2024-07-15 22:38:58.867569] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:43.533 [2024-07-15 22:38:58.867663] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:43.533 true 00:08:43.533 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:43.533 22:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.791 22:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:43.791 22:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.050 22:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:08:44.308 22:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:44.567 [2024-07-15 22:39:00.040169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.567 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66104 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66104 /var/tmp/bdevperf.sock 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66104 ']' 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.825 22:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.084 [2024-07-15 22:39:00.414613] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
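[editor's aside, not part of the captured output] The target-side export just traced condenses to a few RPCs. This is a hedged sketch reusing the exact rpc.py calls, NQN, serial, and listen address from this run (the lvol UUID and 10.0.0.2 address are specific to this job), not a general recipe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode0
    lvol=897a6ff3-a947-4ece-bf4a-fb2370cf816f     # lvol bdev created earlier in this run

    # Expose the lvol over NVMe/TCP on the namespaced target address
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns "$nqn" "$lvol"
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # bdevperf (started above with -r /var/tmp/bdevperf.sock -z) then attaches to it,
    # as traced below, via:
    #   "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    #       -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"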
00:08:45.084 [2024-07-15 22:39:00.415035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66104 ] 00:08:45.084 [2024-07-15 22:39:00.555362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.342 [2024-07-15 22:39:00.684846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.342 [2024-07-15 22:39:00.740036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.909 22:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.909 22:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:45.909 22:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:46.476 Nvme0n1 00:08:46.476 22:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:46.735 [ 00:08:46.735 { 00:08:46.735 "name": "Nvme0n1", 00:08:46.735 "aliases": [ 00:08:46.735 "897a6ff3-a947-4ece-bf4a-fb2370cf816f" 00:08:46.735 ], 00:08:46.735 "product_name": "NVMe disk", 00:08:46.735 "block_size": 4096, 00:08:46.735 "num_blocks": 38912, 00:08:46.735 "uuid": "897a6ff3-a947-4ece-bf4a-fb2370cf816f", 00:08:46.735 "assigned_rate_limits": { 00:08:46.735 "rw_ios_per_sec": 0, 00:08:46.735 "rw_mbytes_per_sec": 0, 00:08:46.735 "r_mbytes_per_sec": 0, 00:08:46.735 "w_mbytes_per_sec": 0 00:08:46.735 }, 00:08:46.735 "claimed": false, 00:08:46.735 "zoned": false, 00:08:46.735 "supported_io_types": { 00:08:46.735 "read": true, 00:08:46.735 "write": true, 00:08:46.735 "unmap": true, 00:08:46.735 "flush": true, 00:08:46.735 "reset": true, 00:08:46.735 "nvme_admin": true, 00:08:46.735 "nvme_io": true, 00:08:46.735 "nvme_io_md": false, 00:08:46.735 "write_zeroes": true, 00:08:46.735 "zcopy": false, 00:08:46.735 "get_zone_info": false, 00:08:46.735 "zone_management": false, 00:08:46.735 "zone_append": false, 00:08:46.735 "compare": true, 00:08:46.735 "compare_and_write": true, 00:08:46.735 "abort": true, 00:08:46.735 "seek_hole": false, 00:08:46.735 "seek_data": false, 00:08:46.735 "copy": true, 00:08:46.735 "nvme_iov_md": false 00:08:46.735 }, 00:08:46.735 "memory_domains": [ 00:08:46.735 { 00:08:46.735 "dma_device_id": "system", 00:08:46.735 "dma_device_type": 1 00:08:46.735 } 00:08:46.735 ], 00:08:46.735 "driver_specific": { 00:08:46.735 "nvme": [ 00:08:46.735 { 00:08:46.735 "trid": { 00:08:46.735 "trtype": "TCP", 00:08:46.735 "adrfam": "IPv4", 00:08:46.735 "traddr": "10.0.0.2", 00:08:46.735 "trsvcid": "4420", 00:08:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:46.735 }, 00:08:46.735 "ctrlr_data": { 00:08:46.735 "cntlid": 1, 00:08:46.735 "vendor_id": "0x8086", 00:08:46.735 "model_number": "SPDK bdev Controller", 00:08:46.735 "serial_number": "SPDK0", 00:08:46.735 "firmware_revision": "24.09", 00:08:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:46.735 "oacs": { 00:08:46.735 "security": 0, 00:08:46.735 "format": 0, 00:08:46.735 "firmware": 0, 00:08:46.735 "ns_manage": 0 00:08:46.735 }, 00:08:46.735 "multi_ctrlr": true, 00:08:46.735 "ana_reporting": false 
00:08:46.735 }, 00:08:46.735 "vs": { 00:08:46.735 "nvme_version": "1.3" 00:08:46.735 }, 00:08:46.735 "ns_data": { 00:08:46.735 "id": 1, 00:08:46.735 "can_share": true 00:08:46.735 } 00:08:46.735 } 00:08:46.735 ], 00:08:46.735 "mp_policy": "active_passive" 00:08:46.735 } 00:08:46.735 } 00:08:46.735 ] 00:08:46.735 22:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66133 00:08:46.735 22:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:46.735 22:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:46.994 Running I/O for 10 seconds... 00:08:47.950 Latency(us) 00:08:47.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.950 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:47.950 =================================================================================================================== 00:08:47.950 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:47.950 00:08:48.882 22:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:48.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.882 Nvme0n1 : 2.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:48.882 =================================================================================================================== 00:08:48.882 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:48.882 00:08:49.140 true 00:08:49.140 22:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:49.140 22:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:49.397 22:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:49.397 22:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:49.397 22:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66133 00:08:49.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.964 Nvme0n1 : 3.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:49.964 =================================================================================================================== 00:08:49.964 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:49.964 00:08:50.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.965 Nvme0n1 : 4.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:50.965 =================================================================================================================== 00:08:50.965 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:50.965 00:08:51.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.899 Nvme0n1 : 5.00 7069.80 27.62 0.00 0.00 0.00 0.00 0.00 00:08:51.899 =================================================================================================================== 00:08:51.899 Total : 7069.80 27.62 0.00 0.00 0.00 0.00 0.00 00:08:51.899 
00:08:52.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.834 Nvme0n1 : 6.00 7076.83 27.64 0.00 0.00 0.00 0.00 0.00 00:08:52.834 =================================================================================================================== 00:08:52.834 Total : 7076.83 27.64 0.00 0.00 0.00 0.00 0.00 00:08:52.834 00:08:54.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.209 Nvme0n1 : 7.00 7045.57 27.52 0.00 0.00 0.00 0.00 0.00 00:08:54.209 =================================================================================================================== 00:08:54.209 Total : 7045.57 27.52 0.00 0.00 0.00 0.00 0.00 00:08:54.209 00:08:54.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.784 Nvme0n1 : 8.00 7006.25 27.37 0.00 0.00 0.00 0.00 0.00 00:08:54.784 =================================================================================================================== 00:08:54.784 Total : 7006.25 27.37 0.00 0.00 0.00 0.00 0.00 00:08:54.784 00:08:56.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.158 Nvme0n1 : 9.00 6947.44 27.14 0.00 0.00 0.00 0.00 0.00 00:08:56.158 =================================================================================================================== 00:08:56.158 Total : 6947.44 27.14 0.00 0.00 0.00 0.00 0.00 00:08:56.158 00:08:57.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.092 Nvme0n1 : 10.00 6925.80 27.05 0.00 0.00 0.00 0.00 0.00 00:08:57.092 =================================================================================================================== 00:08:57.092 Total : 6925.80 27.05 0.00 0.00 0.00 0.00 0.00 00:08:57.092 00:08:57.092 00:08:57.092 Latency(us) 00:08:57.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.092 Nvme0n1 : 10.01 6934.84 27.09 0.00 0.00 18450.91 11796.48 139174.63 00:08:57.092 =================================================================================================================== 00:08:57.092 Total : 6934.84 27.09 0.00 0.00 18450.91 11796.48 139174.63 00:08:57.092 0 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66104 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66104 ']' 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66104 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66104 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:57.092 killing process with pid 66104 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66104' 00:08:57.092 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.092 00:08:57.092 Latency(us) 00:08:57.092 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.092 =================================================================================================================== 00:08:57.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66104 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66104 00:08:57.092 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.350 22:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.609 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:57.609 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:57.865 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:57.865 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:57.865 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65731 00:08:57.865 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65731 00:08:58.123 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65731 Killed "${NVMF_APP[@]}" "$@" 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66266 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66266 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66266 ']' 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
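[editor's aside, not part of the captured output] This is the step that makes the test "dirty": the lvstore still has allocated clusters when the target is killed with SIGKILL, so there is no clean blobstore shutdown and the restarted target must recover it. A hedged sketch of that crash/recover sequence, using only commands visible in this trace (paths, namespace name, and core mask are specific to this job):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    kill -9 "$nvmfpid"        # hard-kill the target; the lvstore is left dirty
    wait "$nvmfpid" || true   # reap it; the shell reports "Killed", as seen above

    # Restart the target inside the test namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Re-attaching the AIO file triggers blobstore recovery ("Performing recovery
    # on blobstore" below), after which the lvstore and its lvol reappear
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096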
00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.123 22:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 [2024-07-15 22:39:13.516363] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:08:58.123 [2024-07-15 22:39:13.516495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.123 [2024-07-15 22:39:13.655135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.381 [2024-07-15 22:39:13.807507] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.381 [2024-07-15 22:39:13.807632] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.381 [2024-07-15 22:39:13.807652] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.381 [2024-07-15 22:39:13.807666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.381 [2024-07-15 22:39:13.807680] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.381 [2024-07-15 22:39:13.807739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.381 [2024-07-15 22:39:13.867550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.948 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.206 [2024-07-15 22:39:14.753910] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:59.206 [2024-07-15 22:39:14.754770] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:59.206 [2024-07-15 22:39:14.754975] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:59.464 22:39:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:59.464 22:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.722 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 897a6ff3-a947-4ece-bf4a-fb2370cf816f -t 2000 00:08:59.981 [ 00:08:59.981 { 00:08:59.981 "name": "897a6ff3-a947-4ece-bf4a-fb2370cf816f", 00:08:59.981 "aliases": [ 00:08:59.981 "lvs/lvol" 00:08:59.981 ], 00:08:59.981 "product_name": "Logical Volume", 00:08:59.981 "block_size": 4096, 00:08:59.981 "num_blocks": 38912, 00:08:59.981 "uuid": "897a6ff3-a947-4ece-bf4a-fb2370cf816f", 00:08:59.981 "assigned_rate_limits": { 00:08:59.981 "rw_ios_per_sec": 0, 00:08:59.981 "rw_mbytes_per_sec": 0, 00:08:59.981 "r_mbytes_per_sec": 0, 00:08:59.981 "w_mbytes_per_sec": 0 00:08:59.981 }, 00:08:59.981 "claimed": false, 00:08:59.981 "zoned": false, 00:08:59.981 "supported_io_types": { 00:08:59.981 "read": true, 00:08:59.981 "write": true, 00:08:59.981 "unmap": true, 00:08:59.981 "flush": false, 00:08:59.981 "reset": true, 00:08:59.981 "nvme_admin": false, 00:08:59.981 "nvme_io": false, 00:08:59.981 "nvme_io_md": false, 00:08:59.981 "write_zeroes": true, 00:08:59.981 "zcopy": false, 00:08:59.981 "get_zone_info": false, 00:08:59.981 "zone_management": false, 00:08:59.981 "zone_append": false, 00:08:59.981 "compare": false, 00:08:59.981 "compare_and_write": false, 00:08:59.981 "abort": false, 00:08:59.981 "seek_hole": true, 00:08:59.981 "seek_data": true, 00:08:59.981 "copy": false, 00:08:59.981 "nvme_iov_md": false 00:08:59.981 }, 00:08:59.981 "driver_specific": { 00:08:59.981 "lvol": { 00:08:59.981 "lvol_store_uuid": "60f9218a-cb93-48e6-a53a-df00df9705fa", 00:08:59.981 "base_bdev": "aio_bdev", 00:08:59.981 "thin_provision": false, 00:08:59.981 "num_allocated_clusters": 38, 00:08:59.981 "snapshot": false, 00:08:59.981 "clone": false, 00:08:59.981 "esnap_clone": false 00:08:59.981 } 00:08:59.981 } 00:08:59.981 } 00:08:59.981 ] 00:08:59.981 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:59.981 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:08:59.981 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:00.241 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:00.241 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:00.241 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:00.500 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:00.500 22:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.759 [2024-07-15 22:39:16.163447] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: 
closing lvstore lvs 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:00.759 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:01.018 request: 00:09:01.018 { 00:09:01.018 "uuid": "60f9218a-cb93-48e6-a53a-df00df9705fa", 00:09:01.018 "method": "bdev_lvol_get_lvstores", 00:09:01.018 "req_id": 1 00:09:01.018 } 00:09:01.018 Got JSON-RPC error response 00:09:01.018 response: 00:09:01.018 { 00:09:01.018 "code": -19, 00:09:01.018 "message": "No such device" 00:09:01.018 } 00:09:01.018 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:01.018 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.018 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.018 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.018 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.276 aio_bdev 00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
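[editor's aside, not part of the captured output] The negative check above is worth spelling out: once the backing aio bdev is deleted, the lvstore is hot-removed and bdev_lvol_get_lvstores is expected to fail with -19 (No such device); re-creating the aio bdev lets examine-on-load bring the lvstore and its lvol back. A hedged sketch of the same check in plain shell, standing in for the test framework's NOT and waitforbdev helpers:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=60f9218a-cb93-48e6-a53a-df00df9705fa      # lvstore UUID from this run
    lvol=897a6ff3-a947-4ece-bf4a-fb2370cf816f

    "$rpc" bdev_aio_delete aio_bdev               # hot-removes the lvstore

    # The lookup must now fail; '!' inverts the exit status like the NOT helper
    if ! "$rpc" bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore gone, as expected (-19 No such device)"
    fi

    # Re-create the aio bdev and wait for the lvol to be examined back in,
    # with the same 2000 timeout value used by the trace above
    "$rpc" bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000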
00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:01.276 22:39:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.534 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 897a6ff3-a947-4ece-bf4a-fb2370cf816f -t 2000 00:09:01.792 [ 00:09:01.792 { 00:09:01.793 "name": "897a6ff3-a947-4ece-bf4a-fb2370cf816f", 00:09:01.793 "aliases": [ 00:09:01.793 "lvs/lvol" 00:09:01.793 ], 00:09:01.793 "product_name": "Logical Volume", 00:09:01.793 "block_size": 4096, 00:09:01.793 "num_blocks": 38912, 00:09:01.793 "uuid": "897a6ff3-a947-4ece-bf4a-fb2370cf816f", 00:09:01.793 "assigned_rate_limits": { 00:09:01.793 "rw_ios_per_sec": 0, 00:09:01.793 "rw_mbytes_per_sec": 0, 00:09:01.793 "r_mbytes_per_sec": 0, 00:09:01.793 "w_mbytes_per_sec": 0 00:09:01.793 }, 00:09:01.793 "claimed": false, 00:09:01.793 "zoned": false, 00:09:01.793 "supported_io_types": { 00:09:01.793 "read": true, 00:09:01.793 "write": true, 00:09:01.793 "unmap": true, 00:09:01.793 "flush": false, 00:09:01.793 "reset": true, 00:09:01.793 "nvme_admin": false, 00:09:01.793 "nvme_io": false, 00:09:01.793 "nvme_io_md": false, 00:09:01.793 "write_zeroes": true, 00:09:01.793 "zcopy": false, 00:09:01.793 "get_zone_info": false, 00:09:01.793 "zone_management": false, 00:09:01.793 "zone_append": false, 00:09:01.793 "compare": false, 00:09:01.793 "compare_and_write": false, 00:09:01.793 "abort": false, 00:09:01.793 "seek_hole": true, 00:09:01.793 "seek_data": true, 00:09:01.793 "copy": false, 00:09:01.793 "nvme_iov_md": false 00:09:01.793 }, 00:09:01.793 "driver_specific": { 00:09:01.793 "lvol": { 00:09:01.793 "lvol_store_uuid": "60f9218a-cb93-48e6-a53a-df00df9705fa", 00:09:01.793 "base_bdev": "aio_bdev", 00:09:01.793 "thin_provision": false, 00:09:01.793 "num_allocated_clusters": 38, 00:09:01.793 "snapshot": false, 00:09:01.793 "clone": false, 00:09:01.793 "esnap_clone": false 00:09:01.793 } 00:09:01.793 } 00:09:01.793 } 00:09:01.793 ] 00:09:01.793 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:01.793 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:01.793 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:02.051 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:02.051 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:02.051 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:02.310 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:02.310 22:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 897a6ff3-a947-4ece-bf4a-fb2370cf816f 00:09:02.568 22:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
60f9218a-cb93-48e6-a53a-df00df9705fa 00:09:02.827 22:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.394 22:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:03.698 00:09:03.698 real 0m21.739s 00:09:03.698 user 0m45.672s 00:09:03.698 sys 0m8.254s 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.698 ************************************ 00:09:03.698 END TEST lvs_grow_dirty 00:09:03.698 ************************************ 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:03.698 nvmf_trace.0 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.698 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.956 rmmod nvme_tcp 00:09:03.956 rmmod nvme_fabrics 00:09:03.956 rmmod nvme_keyring 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66266 ']' 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66266 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66266 ']' 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66266 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
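[editor's aside, not part of the captured output] The teardown traced above (process_shm plus nvmftestfini) reduces to: archive the shared-memory trace file that -e 0xFFFF produced, unload the host-side NVMe/TCP kernel modules, and kill the target. A hedged sketch of the same steps outside the helper functions, with the output path taken from this job:

    # Save the SPDK trace shared-memory file for offline analysis
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0

    # Unload fabrics modules; nvme-tcp must go first since it depends on nvme-fabrics
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid"    # plain SIGTERM this time, unlike the dirty-test kill -9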
00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66266 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.956 killing process with pid 66266 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66266' 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66266 00:09:03.956 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66266 00:09:04.214 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:04.215 00:09:04.215 real 0m42.905s 00:09:04.215 user 1m10.038s 00:09:04.215 sys 0m11.539s 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.215 22:39:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:04.215 ************************************ 00:09:04.215 END TEST nvmf_lvs_grow 00:09:04.215 ************************************ 00:09:04.215 22:39:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.215 22:39:19 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:04.215 22:39:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.215 22:39:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.215 22:39:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.215 ************************************ 00:09:04.215 START TEST nvmf_bdev_io_wait 00:09:04.215 ************************************ 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:04.215 * Looking for test storage... 
00:09:04.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.215 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:04.473 Cannot find device "nvmf_tgt_br" 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.473 Cannot find device "nvmf_tgt_br2" 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:04.473 Cannot find device "nvmf_tgt_br" 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:04.473 Cannot find device "nvmf_tgt_br2" 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
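[editor's aside, not part of the captured output] Before the next test starts its target, nvmf_veth_init wires up a self-contained topology: an initiator veth on the host (10.0.0.1), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and a bridge joining the host-side peers, with iptables opened for TCP port 4420. The cleanup of leftovers is traced just above and the creation steps follow below; a hedged sketch of the same wiring, using the interface names and addresses from this run:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the addressed endpoint, *_br joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target endpoints into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring links up and bridge the host-side peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic in and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT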
00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:04.473 22:39:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:04.473 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.473 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.473 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.473 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:04.473 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:04.473 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:04.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:04.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:09:04.732 00:09:04.732 --- 10.0.0.2 ping statistics --- 00:09:04.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.732 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:04.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:04.732 00:09:04.732 --- 10.0.0.3 ping statistics --- 00:09:04.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.732 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:04.732 00:09:04.732 --- 10.0.0.1 ping statistics --- 00:09:04.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.732 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:04.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66580 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66580 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66580 ']' 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
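A condensed sketch of what the nvmf_veth_init steps above set up: the target side lives in the nvmf_tgt_ns_spdk namespace, the initiator stays in the default namespace, both are stitched together with a Linux bridge, and connectivity is verified with ping before the target is started. Interface names and addresses are the ones used in this run; the sketch compresses the traced commands and is not the harness script itself.

# Sketch of the veth/bridge fabric built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk                                    # target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$port" up master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP (port 4420) in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridged traffic
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # initiator -> target checks
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check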
00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.732 22:39:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:04.732 [2024-07-15 22:39:20.207375] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:09:04.732 [2024-07-15 22:39:20.207839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.990 [2024-07-15 22:39:20.346136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.990 [2024-07-15 22:39:20.499688] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.990 [2024-07-15 22:39:20.499954] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.990 [2024-07-15 22:39:20.500089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.990 [2024-07-15 22:39:20.500230] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.990 [2024-07-15 22:39:20.500286] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.990 [2024-07-15 22:39:20.500541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.990 [2024-07-15 22:39:20.500722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.990 [2024-07-15 22:39:20.500792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.990 [2024-07-15 22:39:20.500793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 [2024-07-15 22:39:21.352834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.935 22:39:21 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 [2024-07-15 22:39:21.365152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 Malloc0 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.935 [2024-07-15 22:39:21.429611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66621 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66623 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.935 { 00:09:05.935 "params": { 00:09:05.935 "name": "Nvme$subsystem", 00:09:05.935 "trtype": "$TEST_TRANSPORT", 00:09:05.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.935 "adrfam": "ipv4", 00:09:05.935 "trsvcid": "$NVMF_PORT", 00:09:05.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.935 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:05.935 "hdgst": ${hdgst:-false}, 00:09:05.935 "ddgst": ${ddgst:-false} 00:09:05.935 }, 00:09:05.935 "method": "bdev_nvme_attach_controller" 00:09:05.935 } 00:09:05.935 EOF 00:09:05.935 )") 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66625 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.935 { 00:09:05.935 "params": { 00:09:05.935 "name": "Nvme$subsystem", 00:09:05.935 "trtype": "$TEST_TRANSPORT", 00:09:05.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.935 "adrfam": "ipv4", 00:09:05.935 "trsvcid": "$NVMF_PORT", 00:09:05.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.935 "hdgst": ${hdgst:-false}, 00:09:05.935 "ddgst": ${ddgst:-false} 00:09:05.935 }, 00:09:05.935 "method": "bdev_nvme_attach_controller" 00:09:05.935 } 00:09:05.935 EOF 00:09:05.935 )") 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66628 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.935 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.935 { 00:09:05.935 "params": { 00:09:05.935 "name": "Nvme$subsystem", 00:09:05.935 "trtype": "$TEST_TRANSPORT", 00:09:05.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.935 "adrfam": "ipv4", 00:09:05.935 "trsvcid": "$NVMF_PORT", 00:09:05.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.936 "hdgst": ${hdgst:-false}, 00:09:05.936 "ddgst": ${ddgst:-false} 00:09:05.936 }, 00:09:05.936 "method": "bdev_nvme_attach_controller" 00:09:05.936 } 00:09:05.936 EOF 00:09:05.936 )") 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.936 { 00:09:05.936 "params": { 00:09:05.936 "name": "Nvme$subsystem", 00:09:05.936 "trtype": "$TEST_TRANSPORT", 00:09:05.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.936 "adrfam": "ipv4", 00:09:05.936 "trsvcid": "$NVMF_PORT", 00:09:05.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.936 "hdgst": ${hdgst:-false}, 00:09:05.936 "ddgst": ${ddgst:-false} 00:09:05.936 }, 00:09:05.936 "method": "bdev_nvme_attach_controller" 00:09:05.936 } 00:09:05.936 EOF 00:09:05.936 )") 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.936 "params": { 00:09:05.936 "name": "Nvme1", 00:09:05.936 "trtype": "tcp", 00:09:05.936 "traddr": "10.0.0.2", 00:09:05.936 "adrfam": "ipv4", 00:09:05.936 "trsvcid": "4420", 00:09:05.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.936 "hdgst": false, 00:09:05.936 "ddgst": false 00:09:05.936 }, 00:09:05.936 "method": "bdev_nvme_attach_controller" 00:09:05.936 }' 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.936 "params": { 00:09:05.936 "name": "Nvme1", 00:09:05.936 "trtype": "tcp", 00:09:05.936 "traddr": "10.0.0.2", 00:09:05.936 "adrfam": "ipv4", 00:09:05.936 "trsvcid": "4420", 00:09:05.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.936 "hdgst": false, 00:09:05.936 "ddgst": false 00:09:05.936 }, 00:09:05.936 "method": "bdev_nvme_attach_controller" 00:09:05.936 }' 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.936 "params": { 00:09:05.936 "name": "Nvme1", 00:09:05.936 "trtype": "tcp", 00:09:05.936 "traddr": "10.0.0.2", 00:09:05.936 "adrfam": "ipv4", 00:09:05.936 "trsvcid": "4420", 00:09:05.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.936 "hdgst": false, 00:09:05.936 "ddgst": false 00:09:05.936 }, 00:09:05.936 "method": "bdev_nvme_attach_controller" 00:09:05.936 }' 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
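Once the shell variables in those heredocs are substituted, every instance ends up with the same attach-controller entry; pretty-printed (values exactly as the printf calls above emit them), it is:

# Sketch: the resolved bdev_nvme_attach_controller entry each bdevperf instance receives.
jq . <<'JSON'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON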
00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.936 "params": { 00:09:05.936 "name": "Nvme1", 00:09:05.936 "trtype": "tcp", 00:09:05.936 "traddr": "10.0.0.2", 00:09:05.936 "adrfam": "ipv4", 00:09:05.936 "trsvcid": "4420", 00:09:05.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.936 "hdgst": false, 00:09:05.936 "ddgst": false 00:09:05.936 }, 00:09:05.936 "method": "bdev_nvme_attach_controller" 00:09:05.936 }' 00:09:05.936 22:39:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66621 [2024-07-15 22:39:21.501396] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... [2024-07-15 22:39:21.501736] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:05.936 [2024-07-15 22:39:21.501752] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... [2024-07-15 22:39:21.501857] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:06.193 [2024-07-15 22:39:21.518468] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... [2024-07-15 22:39:21.518793] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:06.193 [2024-07-15 22:39:21.533159] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... [2024-07-15 22:39:21.533529] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:06.193 [2024-07-15 22:39:21.716594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.450 [2024-07-15 22:39:21.788618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.450 [2024-07-15 22:39:21.836866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:06.450 [2024-07-15 22:39:21.859939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.450 [2024-07-15 22:39:21.880491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:06.450 [2024-07-15 22:39:21.887800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.450 [2024-07-15 22:39:21.925792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.450 [2024-07-15 22:39:21.938419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.450 [2024-07-15 22:39:21.956675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:06.450 [2024-07-15 22:39:22.003187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.450 Running I/O for 1 seconds... 00:09:06.722 Running I/O for 1 seconds... 
00:09:06.722 [2024-07-15 22:39:22.034008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.722 [2024-07-15 22:39:22.080383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.722 Running I/O for 1 seconds... 00:09:06.722 Running I/O for 1 seconds... 00:09:07.664 00:09:07.664 Latency(us) 00:09:07.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.664 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:07.664 Nvme1n1 : 1.00 173073.98 676.07 0.00 0.00 736.88 404.01 997.93 00:09:07.664 =================================================================================================================== 00:09:07.664 Total : 173073.98 676.07 0.00 0.00 736.88 404.01 997.93 00:09:07.664 00:09:07.664 Latency(us) 00:09:07.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.664 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:07.664 Nvme1n1 : 1.02 5848.91 22.85 0.00 0.00 21549.13 8996.31 42657.98 00:09:07.664 =================================================================================================================== 00:09:07.664 Total : 5848.91 22.85 0.00 0.00 21549.13 8996.31 42657.98 00:09:07.664 00:09:07.664 Latency(us) 00:09:07.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.664 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:07.664 Nvme1n1 : 1.01 5848.63 22.85 0.00 0.00 21812.07 5838.66 44087.85 00:09:07.664 =================================================================================================================== 00:09:07.664 Total : 5848.63 22.85 0.00 0.00 21812.07 5838.66 44087.85 00:09:07.664 00:09:07.664 Latency(us) 00:09:07.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.664 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:07.664 Nvme1n1 : 1.01 8958.96 35.00 0.00 0.00 14219.30 6553.60 25380.31 00:09:07.664 =================================================================================================================== 00:09:07.664 Total : 8958.96 35.00 0.00 0.00 14219.30 6553.60 25380.31 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66623 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66625 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66628 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.921 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:08.178 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:08.179 22:39:23 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.179 rmmod nvme_tcp 00:09:08.179 rmmod nvme_fabrics 00:09:08.179 rmmod nvme_keyring 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66580 ']' 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66580 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66580 ']' 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66580 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66580 00:09:08.179 killing process with pid 66580 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66580' 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66580 00:09:08.179 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66580 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:08.437 00:09:08.437 real 0m4.165s 00:09:08.437 user 0m18.058s 00:09:08.437 sys 0m2.255s 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.437 ************************************ 00:09:08.437 END TEST nvmf_bdev_io_wait 00:09:08.437 ************************************ 00:09:08.437 22:39:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.437 22:39:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.437 22:39:23 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:08.437 22:39:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.437 22:39:23 
nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.437 22:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.437 ************************************ 00:09:08.437 START TEST nvmf_queue_depth 00:09:08.437 ************************************ 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:08.437 * Looking for test storage... 00:09:08.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.437 22:39:23 
nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 
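Beyond the shared PATH and common.sh setup, queue_depth.sh only introduces a few knobs before calling nvmftestinit; collected from the trace above:

# The queue_depth.sh settings traced above, gathered in one place.
MALLOC_BDEV_SIZE=64                          # size (MB) of the Malloc0 bdev exported over NVMe/TCP
MALLOC_BLOCK_SIZE=512                        # its block size
bdevperf_rpc_sock=/var/tmp/bdevperf.sock     # bdevperf gets its own RPC socket, separate from the target's /var/tmp/spdk.sock
nvmftestinit                                 # rebuilds the veth/namespace fabric for this test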
00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.437 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.438 22:39:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:08.438 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:08.695 Cannot find device "nvmf_tgt_br" 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.695 Cannot find device "nvmf_tgt_br2" 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:08.695 Cannot find device "nvmf_tgt_br" 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:08.695 Cannot find device "nvmf_tgt_br2" 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:08.695 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:08.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:09:08.952 00:09:08.952 --- 10.0.0.2 ping statistics --- 00:09:08.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.952 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:08.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:08.952 00:09:08.952 --- 10.0.0.3 ping statistics --- 00:09:08.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.952 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:08.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:08.952 00:09:08.952 --- 10.0.0.1 ping statistics --- 00:09:08.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.952 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66853 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66853 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66853 ']' 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
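The nvmfappstart call above comes down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers; a sketch of that pattern with the arguments from this run (waitforlisten is the harness helper behind the 'Waiting for process to start up...' message):

# Sketch of nvmfappstart -m 0x2 as traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                        # 66853 in this run
waitforlisten "$nvmfpid"          # polls until /var/tmp/spdk.sock accepts RPCs
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT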
00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.952 22:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.952 [2024-07-15 22:39:24.378412] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:09:08.952 [2024-07-15 22:39:24.378539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.210 [2024-07-15 22:39:24.521277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.210 [2024-07-15 22:39:24.653079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.210 [2024-07-15 22:39:24.653153] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.210 [2024-07-15 22:39:24.653167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.210 [2024-07-15 22:39:24.653177] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.210 [2024-07-15 22:39:24.653186] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.210 [2024-07-15 22:39:24.653219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.210 [2024-07-15 22:39:24.710578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.776 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.034 [2024-07-15 22:39:25.344969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.034 Malloc0 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.034 [2024-07-15 22:39:25.407554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66885 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66885 /var/tmp/bdevperf.sock 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66885 ']' 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.034 22:39:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:10.034 [2024-07-15 22:39:25.466648] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:09:10.034 [2024-07-15 22:39:25.466758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66885 ] 00:09:10.292 [2024-07-15 22:39:25.605109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.292 [2024-07-15 22:39:25.741620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.292 [2024-07-15 22:39:25.800003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:11.224 NVMe0n1 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.224 22:39:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:11.224 Running I/O for 10 seconds... 00:09:21.210 00:09:21.210 Latency(us) 00:09:21.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.210 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:21.210 Verification LBA range: start 0x0 length 0x4000 00:09:21.210 NVMe0n1 : 10.07 7650.27 29.88 0.00 0.00 133235.43 13107.20 97231.59 00:09:21.210 =================================================================================================================== 00:09:21.210 Total : 7650.27 29.88 0.00 0.00 133235.43 13107.20 97231.59 00:09:21.210 0 00:09:21.210 22:39:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66885 00:09:21.210 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66885 ']' 00:09:21.210 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66885 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66885 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:21.468 killing process with pid 66885 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66885' 00:09:21.468 Received shutdown signal, test time was about 10.000000 seconds 00:09:21.468 00:09:21.468 Latency(us) 00:09:21.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.468 =================================================================================================================== 00:09:21.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:21.468 
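Putting the queue-depth run above together: the target is provisioned over /var/tmp/spdk.sock, bdevperf is started in waiting mode (-z) on its own socket, the NVMe/TCP controller is attached through that socket, and bdevperf.py then drives the 10-second verify run at queue depth 1024. A condensed sketch using the commands and arguments from the trace (rpc_cmd is the harness wrapper around SPDK's rpc.py):

# Sketch of the sequence that produced the NVMe0n1 result table above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf waits (-z) on its own RPC socket so the controller can be attached at runtime.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!                   # 66885 in this run
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests       # the 10-second verify run above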
22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66885 00:09:21.468 22:39:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66885 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:21.726 rmmod nvme_tcp 00:09:21.726 rmmod nvme_fabrics 00:09:21.726 rmmod nvme_keyring 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66853 ']' 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66853 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66853 ']' 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66853 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66853 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:21.726 killing process with pid 66853 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66853' 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66853 00:09:21.726 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66853 00:09:21.983 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:21.983 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:21.983 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:21.984 00:09:21.984 real 0m13.562s 00:09:21.984 
user 0m23.402s 00:09:21.984 sys 0m2.364s 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.984 22:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.984 ************************************ 00:09:21.984 END TEST nvmf_queue_depth 00:09:21.984 ************************************ 00:09:21.984 22:39:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:21.984 22:39:37 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.984 22:39:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.984 22:39:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.984 22:39:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.984 ************************************ 00:09:21.984 START TEST nvmf_target_multipath 00:09:21.984 ************************************ 00:09:21.984 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:22.241 * Looking for test storage... 00:09:22.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.241 
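The nvmftestfini teardown traced just before this point stops the target application, unloads the initiator-side NVMe modules, and clears the test network state. A hedged sketch of that sequence; the namespace-deletion line is only a guess at what the _remove_spdk_ns helper does, since its body is not shown in this log.

    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # killprocess: stop the nvmf_tgt app
    modprobe -v -r nvme-tcp                          # trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null     # assumption: the effect of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if                    # drop the initiator-side address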
22:39:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.241 22:39:37 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
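The common.sh trace above is assembling the target command line: NVMF_APP picks up the shared-memory id and the 0xFFFF tracepoint mask, and, as the later nvmfappstart trace shows, it is prefixed with the namespace wrapper and given a core mask before launch. A sketch of how those pieces combine; the initial array contents are inferred from the final command line that nvmfappstart prints further down.

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # shm id is 0 in this run
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)     # prepended once the namespace exists
    # nvmfappstart -m 0xF then effectively runs:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF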
00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # 
ip link set nvmf_init_br nomaster 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:22.242 Cannot find device "nvmf_tgt_br" 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.242 Cannot find device "nvmf_tgt_br2" 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:22.242 Cannot find device "nvmf_tgt_br" 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:22.242 Cannot find device "nvmf_tgt_br2" 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.242 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:22.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:22.500 00:09:22.500 --- 10.0.0.2 ping statistics --- 00:09:22.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.500 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:22.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:22.500 00:09:22.500 --- 10.0.0.3 ping statistics --- 00:09:22.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.500 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:22.500 00:09:22.500 --- 10.0.0.1 ping statistics --- 00:09:22.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.500 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67205 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67205 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67205 ']' 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.501 22:39:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.501 [2024-07-15 22:39:38.038929] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
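nvmf_veth_init, traced above, builds the whole test network in software: three veth pairs tied together by a bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace so that the initiator at 10.0.0.1 reaches two target addresses, 10.0.0.2 and 10.0.0.3. A condensed sketch of that topology with names and addresses copied from the trace; the per-link "up" commands are folded into loops.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator path
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target path
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # same sanity checks as the trace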
00:09:22.501 [2024-07-15 22:39:38.039025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.758 [2024-07-15 22:39:38.177829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.758 [2024-07-15 22:39:38.315482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.758 [2024-07-15 22:39:38.315588] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.758 [2024-07-15 22:39:38.315605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.758 [2024-07-15 22:39:38.315616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.758 [2024-07-15 22:39:38.315626] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.758 [2024-07-15 22:39:38.315819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.758 [2024-07-15 22:39:38.315919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.758 [2024-07-15 22:39:38.316725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.758 [2024-07-15 22:39:38.316735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.016 [2024-07-15 22:39:38.374491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.581 22:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:23.838 [2024-07-15 22:39:39.310727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.838 22:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:24.096 Malloc0 00:09:24.096 22:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:24.354 22:39:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.611 22:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.869 [2024-07-15 22:39:40.339585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.869 22:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:25.127 [2024-07-15 22:39:40.563771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:25.127 22:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:25.385 22:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:25.385 22:39:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.385 22:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.385 22:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.385 22:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.385 22:39:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.290 22:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.290 22:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.290 22:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67300 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:27.549 22:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:27.549 [global] 00:09:27.549 thread=1 00:09:27.549 invalidate=1 00:09:27.549 rw=randrw 00:09:27.549 time_based=1 00:09:27.549 runtime=6 00:09:27.549 ioengine=libaio 00:09:27.549 direct=1 00:09:27.549 bs=4096 00:09:27.549 iodepth=128 00:09:27.549 norandommap=0 00:09:27.549 numjobs=1 00:09:27.549 00:09:27.549 verify_dump=1 00:09:27.549 verify_backlog=512 00:09:27.549 verify_state_save=0 00:09:27.549 do_verify=1 00:09:27.549 verify=crc32c-intel 00:09:27.549 [job0] 00:09:27.549 filename=/dev/nvme0n1 00:09:27.549 Could not set queue depth (nvme0n1) 00:09:27.549 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.549 fio-3.35 00:09:27.549 Starting 1 thread 00:09:28.491 22:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:28.749 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:29.007 22:39:44 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:29.007 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:29.266 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:29.525 22:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67300 00:09:33.715 00:09:33.715 job0: (groupid=0, jobs=1): err= 0: pid=67321: Mon Jul 15 22:39:49 2024 00:09:33.715 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(244MiB/6003msec) 00:09:33.715 slat (usec): min=6, max=8519, avg=56.53, stdev=224.88 00:09:33.715 clat (usec): min=1719, max=20227, avg=8440.51, stdev=1544.41 00:09:33.715 lat (usec): min=1845, max=20257, avg=8497.04, stdev=1550.13 00:09:33.715 clat percentiles (usec): 00:09:33.715 | 1.00th=[ 4424], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 7570], 00:09:33.715 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:09:33.715 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[11994], 00:09:33.715 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14746], 99.95th=[16450], 00:09:33.715 | 99.99th=[16581] 00:09:33.715 bw ( KiB/s): min= 2416, max=26680, per=51.72%, avg=21502.64, stdev=8242.47, samples=11 00:09:33.715 iops : min= 604, max= 6670, avg=5375.64, stdev=2060.19, samples=11 00:09:33.715 write: IOPS=6175, BW=24.1MiB/s (25.3MB/s)(127MiB/5263msec); 0 zone resets 00:09:33.715 slat (usec): min=13, max=3631, avg=65.59, stdev=151.44 00:09:33.715 clat (usec): min=1842, max=14843, avg=7229.50, stdev=1278.31 00:09:33.715 lat (usec): min=1881, max=14868, avg=7295.09, stdev=1282.85 00:09:33.715 clat percentiles (usec): 00:09:33.715 | 1.00th=[ 3523], 5.00th=[ 4359], 10.00th=[ 5669], 20.00th=[ 6718], 00:09:33.715 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:09:33.715 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:33.715 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13304], 99.95th=[14091], 00:09:33.715 | 99.99th=[14615] 00:09:33.715 bw ( KiB/s): min= 2496, max=26712, per=87.32%, avg=21570.73, stdev=8053.81, samples=11 00:09:33.715 iops : min= 624, max= 6678, avg=5392.64, stdev=2013.43, samples=11 00:09:33.715 lat (msec) : 2=0.01%, 4=1.30%, 10=90.71%, 20=7.97%, 50=0.01% 00:09:33.715 cpu : usr=5.31%, sys=23.19%, ctx=5537, majf=0, minf=114 00:09:33.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:33.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.715 issued rwts: total=62386,32502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.715 00:09:33.715 Run status group 0 (all jobs): 00:09:33.715 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=244MiB (256MB), run=6003-6003msec 00:09:33.715 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=127MiB (133MB), run=5263-5263msec 00:09:33.715 00:09:33.715 Disk stats (read/write): 00:09:33.715 nvme0n1: ios=61484/31990, merge=0/0, ticks=495947/215826, in_queue=711773, util=98.58% 00:09:33.715 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:33.973 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n optimized 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67396 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:34.232 22:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:34.232 [global] 00:09:34.232 thread=1 00:09:34.232 invalidate=1 00:09:34.232 rw=randrw 00:09:34.232 time_based=1 00:09:34.232 runtime=6 00:09:34.232 ioengine=libaio 00:09:34.232 direct=1 00:09:34.232 bs=4096 00:09:34.232 iodepth=128 00:09:34.232 norandommap=0 00:09:34.232 numjobs=1 00:09:34.232 00:09:34.232 verify_dump=1 00:09:34.232 verify_backlog=512 00:09:34.232 verify_state_save=0 00:09:34.232 do_verify=1 00:09:34.232 verify=crc32c-intel 00:09:34.232 [job0] 00:09:34.232 filename=/dev/nvme0n1 00:09:34.490 Could not set queue depth (nvme0n1) 00:09:34.490 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.490 fio-3.35 00:09:34.490 Starting 1 thread 00:09:35.424 22:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:35.682 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:35.940 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:35.940 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:35.940 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.940 22:39:51 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:35.940 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:35.941 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:36.198 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:36.457 22:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67396 00:09:40.666 00:09:40.667 job0: (groupid=0, jobs=1): err= 0: pid=67417: Mon Jul 15 22:39:56 2024 00:09:40.667 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(268MiB/6002msec) 00:09:40.667 slat (usec): min=3, max=6461, avg=43.71, stdev=197.01 00:09:40.667 clat (usec): min=383, max=17695, avg=7603.33, stdev=2066.96 00:09:40.667 lat (usec): min=394, max=17709, avg=7647.04, stdev=2083.99 00:09:40.667 clat percentiles (usec): 00:09:40.667 | 1.00th=[ 2900], 5.00th=[ 4080], 10.00th=[ 4752], 20.00th=[ 5735], 00:09:40.667 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8160], 00:09:40.667 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[10945], 00:09:40.667 | 99.00th=[13435], 99.50th=[14877], 99.90th=[16188], 99.95th=[16909], 00:09:40.667 | 99.99th=[17695] 00:09:40.667 bw ( KiB/s): min=14096, max=43192, per=54.42%, avg=24868.36, stdev=8572.47, samples=11 00:09:40.667 iops : min= 3524, max=10798, avg=6217.09, stdev=2143.12, samples=11 00:09:40.667 write: IOPS=6886, BW=26.9MiB/s (28.2MB/s)(145MiB/5389msec); 0 zone resets 00:09:40.667 slat (usec): min=12, max=4100, avg=55.48, stdev=130.50 00:09:40.667 clat (usec): min=324, max=16597, avg=6485.06, stdev=1856.90 00:09:40.667 lat (usec): min=353, max=16634, avg=6540.54, stdev=1870.88 00:09:40.667 clat percentiles (usec): 00:09:40.667 | 1.00th=[ 2638], 5.00th=[ 3359], 10.00th=[ 3785], 20.00th=[ 4424], 00:09:40.667 | 30.00th=[ 5211], 40.00th=[ 6587], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:40.667 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8356], 95.00th=[ 8848], 00:09:40.667 | 99.00th=[10683], 99.50th=[11338], 99.90th=[13698], 99.95th=[14353], 00:09:40.667 | 99.99th=[15139] 00:09:40.667 bw ( KiB/s): min=14336, max=42112, per=90.25%, avg=24861.82, stdev=8247.93, samples=11 00:09:40.667 iops : min= 3584, max=10528, avg=6215.45, stdev=2061.98, samples=11 00:09:40.667 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:40.667 lat (msec) : 2=0.20%, 4=7.11%, 10=87.27%, 20=5.38% 00:09:40.667 cpu : usr=6.08%, sys=25.17%, ctx=6282, majf=0, minf=108 00:09:40.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:40.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.667 issued rwts: total=68570,37111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.667 00:09:40.667 Run status group 0 (all jobs): 00:09:40.667 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6002-6002msec 00:09:40.667 WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=145MiB (152MB), run=5389-5389msec 00:09:40.667 00:09:40.667 Disk stats (read/write): 00:09:40.667 nvme0n1: ios=67792/36498, merge=0/0, ticks=484155/215411, in_queue=699566, util=98.65% 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:40.667 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.984 rmmod nvme_tcp 00:09:40.984 rmmod nvme_fabrics 00:09:40.984 rmmod nvme_keyring 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67205 ']' 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67205 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67205 ']' 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67205 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67205 00:09:40.984 killing process with pid 67205 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67205' 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67205 00:09:40.984 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67205 00:09:41.253 
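The multipath test that finishes above connects two TCP paths to the same subsystem, flips the ANA state of each listener through the RPC interface, and polls the kernel's per-path ana_state files while fio keeps I/O running. A condensed sketch of one iteration of that loop; the host NQN/ID, addresses, and sysfs layout are copied from the trace, the nvme connect flags are reproduced verbatim, and the 20-second wait mirrors the harness's check_ana_state helper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0
          --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0)
    nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    # Make the first path inaccessible on the target and wait for the host to notice.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    for _ in $(seq 20); do
        grep -q inaccessible /sys/block/nvme0c0n1/ana_state && break
        sleep 1
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1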
22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:41.253 ************************************ 00:09:41.253 END TEST nvmf_target_multipath 00:09:41.253 ************************************ 00:09:41.253 00:09:41.253 real 0m19.306s 00:09:41.253 user 1m12.316s 00:09:41.253 sys 0m9.863s 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.253 22:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:41.513 22:39:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:41.513 22:39:56 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:41.513 22:39:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:41.513 22:39:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.513 22:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.513 ************************************ 00:09:41.513 START TEST nvmf_zcopy 00:09:41.513 ************************************ 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:41.513 * Looking for test storage... 
00:09:41.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.513 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:41.514 Cannot find device "nvmf_tgt_br" 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.514 Cannot find device "nvmf_tgt_br2" 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:41.514 22:39:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:41.514 Cannot find device "nvmf_tgt_br" 00:09:41.514 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:41.514 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:41.514 Cannot find device "nvmf_tgt_br2" 00:09:41.514 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:41.514 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:41.514 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:41.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:09:41.773 00:09:41.773 --- 10.0.0.2 ping statistics --- 00:09:41.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.773 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:41.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:41.773 00:09:41.773 --- 10.0.0.3 ping statistics --- 00:09:41.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.773 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:41.773 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:41.773 00:09:41.773 --- 10.0.0.1 ping statistics --- 00:09:41.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.774 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.774 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.034 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67672 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67672 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67672 ']' 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.035 22:39:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.035 [2024-07-15 22:39:57.413345] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:09:42.035 [2024-07-15 22:39:57.413497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.035 [2024-07-15 22:39:57.555987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.293 [2024-07-15 22:39:57.715259] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.293 [2024-07-15 22:39:57.715375] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
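The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down whatever a previous run might have left behind, and on a clean runner there is nothing to delete. The constructive part of the bring-up, condensed from the commands logged above, builds three veth pairs, moves the two target-side ends into the nvmf_tgt_ns_spdk namespace, and bridges the host-side peers together; nvmf_tgt is then launched under "ip netns exec nvmf_tgt_ns_spdk" (visible just above) so it listens inside that namespace. A condensed sketch of the same topology:

# Namespace plus three veth pairs; names and addresses are the ones printed in the log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator address stays in the root namespace; the two target addresses live in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge ties the host-side peers together so 10.0.0.1/2/3 share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in on the initiator side, allow forwarding across the bridge,
# then verify both directions before provisioning the target.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1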
00:09:42.293 [2024-07-15 22:39:57.715400] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.293 [2024-07-15 22:39:57.715419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.293 [2024-07-15 22:39:57.715436] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.293 [2024-07-15 22:39:57.715504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.293 [2024-07-15 22:39:57.774910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 [2024-07-15 22:39:58.353354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 [2024-07-15 22:39:58.369526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
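With nvmf_tgt up and listening on /var/tmp/spdk.sock, zcopy.sh provisions it through rpc_cmd, the test suite's wrapper around scripts/rpc.py. Issued standalone, the same sequence looks roughly like the sketch below; the flags are copied from the log, and the last two steps (creating the malloc bdev and attaching it as namespace 1) complete at the top of the next stretch of output. The --zcopy flag is the point of the test: it enables the TCP transport's zero-copy path, which the workloads that follow are meant to exercise.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport with zero-copy enabled and in-capsule data size forced to 0.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1: allow any host (-a), fixed serial (-s), up to 10 namespaces (-m).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data listener plus a discovery listener on the namespaced target address.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MB malloc bdev with a 4 KiB block size, attached to cnode1 as NSID 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1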
00:09:42.864 malloc0 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:42.864 { 00:09:42.864 "params": { 00:09:42.864 "name": "Nvme$subsystem", 00:09:42.864 "trtype": "$TEST_TRANSPORT", 00:09:42.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.864 "adrfam": "ipv4", 00:09:42.864 "trsvcid": "$NVMF_PORT", 00:09:42.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.864 "hdgst": ${hdgst:-false}, 00:09:42.864 "ddgst": ${ddgst:-false} 00:09:42.864 }, 00:09:42.864 "method": "bdev_nvme_attach_controller" 00:09:42.864 } 00:09:42.864 EOF 00:09:42.864 )") 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:42.864 22:39:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:42.864 "params": { 00:09:42.864 "name": "Nvme1", 00:09:42.864 "trtype": "tcp", 00:09:42.864 "traddr": "10.0.0.2", 00:09:42.864 "adrfam": "ipv4", 00:09:42.864 "trsvcid": "4420", 00:09:42.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.864 "hdgst": false, 00:09:42.864 "ddgst": false 00:09:42.864 }, 00:09:42.864 "method": "bdev_nvme_attach_controller" 00:09:42.864 }' 00:09:43.122 [2024-07-15 22:39:58.469646] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:09:43.122 [2024-07-15 22:39:58.469789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67705 ] 00:09:43.122 [2024-07-15 22:39:58.613901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.379 [2024-07-15 22:39:58.752910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.379 [2024-07-15 22:39:58.819046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.379 Running I/O for 10 seconds... 
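gen_nvmf_target_json (the heredoc from nvmf/common.sh visible above) emits one bdev_nvme_attach_controller block per target, which is exactly the object printed above, and bdevperf reads the result from /dev/fd/62, i.e. a bash process substitution, so no temporary file is involved. A hypothetical stand-in is sketched below: the inner params/method block is copied from the log, while the surrounding "subsystems"/"config" scaffolding is the standard SPDK JSON-config layout and is assumed here, as are any extra defaults the real helper may add.

# Hypothetical replacement for gen_nvmf_target_json, limited to what the log shows.
gen_bdevperf_config() {
    local traddr=${1:-10.0.0.2} trsvcid=${2:-4420}
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "$traddr",
            "adrfam": "ipv4",
            "trsvcid": "$trsvcid",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# First workload, matching the command above: 10 s of 8 KiB verify I/O at queue depth 128.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_bdevperf_config 10.0.0.2 4420) -t 10 -q 128 -w verify -o 8192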
00:09:53.404 00:09:53.404 Latency(us) 00:09:53.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.404 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:53.404 Verification LBA range: start 0x0 length 0x1000 00:09:53.404 Nvme1n1 : 10.01 5918.23 46.24 0.00 0.00 21558.90 2904.44 33602.09 00:09:53.404 =================================================================================================================== 00:09:53.404 Total : 5918.23 46.24 0.00 0.00 21558.90 2904.44 33602.09 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67821 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:53.714 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.715 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.715 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.715 { 00:09:53.715 "params": { 00:09:53.715 "name": "Nvme$subsystem", 00:09:53.715 "trtype": "$TEST_TRANSPORT", 00:09:53.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.715 "adrfam": "ipv4", 00:09:53.715 "trsvcid": "$NVMF_PORT", 00:09:53.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.715 "hdgst": ${hdgst:-false}, 00:09:53.715 "ddgst": ${ddgst:-false} 00:09:53.715 }, 00:09:53.715 "method": "bdev_nvme_attach_controller" 00:09:53.715 } 00:09:53.715 EOF 00:09:53.715 )") 00:09:53.715 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:53.715 [2024-07-15 22:40:09.183783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.183828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.715 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
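The table above closes the first pass: about 5,918 IOPS, roughly 46 MiB/s at the 8 KiB I/O size, an average latency near 21.6 ms at queue depth 128, and no failed or timed-out I/O over the 10 s verify run. zcopy.sh then starts a second bdevperf in the background (perfpid 67821 in this run) for a 5 s 50/50 random read/write pass at the same queue depth and I/O size, again handing it a generated config over an anonymous fd, /dev/fd/63 this time. The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that begin here and fill the rest of the log appear intentional rather than a failure: the script keeps asking for an NSID that already exists, and although every attempt is rejected, each one pauses and resumes the subsystem (the error is logged from nvmf_rpc_ns_paused) while zero-copy I/O is in flight. A launch sketch, reusing the hypothetical helper above:

# Second workload in the background; flags copied from the command above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_bdevperf_config 10.0.0.2 4420) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!    # zcopy.sh records the PID the same way (67821 here)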
00:09:53.715 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:53.715 22:40:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.715 "params": { 00:09:53.715 "name": "Nvme1", 00:09:53.715 "trtype": "tcp", 00:09:53.715 "traddr": "10.0.0.2", 00:09:53.715 "adrfam": "ipv4", 00:09:53.715 "trsvcid": "4420", 00:09:53.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.715 "hdgst": false, 00:09:53.715 "ddgst": false 00:09:53.715 }, 00:09:53.715 "method": "bdev_nvme_attach_controller" 00:09:53.715 }' 00:09:53.715 [2024-07-15 22:40:09.195741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.195771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.715 [2024-07-15 22:40:09.207742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.207771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.715 [2024-07-15 22:40:09.217614] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:09:53.715 [2024-07-15 22:40:09.217694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67821 ] 00:09:53.715 [2024-07-15 22:40:09.219749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.219780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.715 [2024-07-15 22:40:09.231750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.231776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.715 [2024-07-15 22:40:09.244010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.244129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.715 [2024-07-15 22:40:09.255797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.715 [2024-07-15 22:40:09.255846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.267806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.267844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.279817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.279862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.291812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.291861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.299851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.299918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.311813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:09:53.985 [2024-07-15 22:40:09.311859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.323826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.323869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.335848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.335903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.347821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.347859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.354234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.985 [2024-07-15 22:40:09.359823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.359874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.371844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.371886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.383878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.383961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.395894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.395963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.407907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.408007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.419840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.419889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.431824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.431869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.443830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.443862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.453597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.985 [2024-07-15 22:40:09.455885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.455952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.467855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.467907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.479836] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.479883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.491866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.491913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.503842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.503875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.515855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.985 [2024-07-15 22:40:09.515907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.985 [2024-07-15 22:40:09.519072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.986 [2024-07-15 22:40:09.527865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.986 [2024-07-15 22:40:09.527921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.986 [2024-07-15 22:40:09.539896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.986 [2024-07-15 22:40:09.539956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.986 [2024-07-15 22:40:09.551873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.986 [2024-07-15 22:40:09.551907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.563903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.563964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.575928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.575985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.587959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.588031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.599961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.600028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.611942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.611987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.623971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.624010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 Running I/O for 5 seconds... 
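Once the second bdevperf instance has its reactor running and "Running I/O for 5 seconds..." appears, the rejected add-namespace attempts keep arriving for the entire window; the timestamps below advance from 22:40:09 through 22:40:11. The exact loop lives in zcopy.sh and is not shown in this log, but an equivalent shape, with the iteration and error handling assumed, continues the sketch above:

# Keep re-issuing the add (expected to fail, NSID 1 is taken) while bdevperf is alive,
# then reap the background job. This is an assumed reconstruction, not the script itself.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
while kill -0 "$perfpid" 2>/dev/null; do
    if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "nvmf_subsystem_add_ns unexpectedly succeeded" >&2
        exit 1
    fi
done
wait "$perfpid"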
00:09:54.245 [2024-07-15 22:40:09.635972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.636014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.654255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.654305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.670265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.670345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.687515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.687589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.703087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.703123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.718679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.718762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.734401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.734438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.746033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.746080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.760705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.760750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.776096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.776131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.794101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.794139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.245 [2024-07-15 22:40:09.807947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.245 [2024-07-15 22:40:09.808019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.823264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.823312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.842654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.842699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.857913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 
[2024-07-15 22:40:09.857981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.873502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.873608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.889808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.889847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.906131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.906175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.922945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.922989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.938319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.938391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.953677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.953733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.963740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.963776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.980338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.980414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:09.996198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:09.996306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:10.014088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:10.014141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:10.029062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:10.029121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:10.043872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:10.043925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.505 [2024-07-15 22:40:10.058498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.505 [2024-07-15 22:40:10.058544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.764 [2024-07-15 22:40:10.073740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.764 [2024-07-15 22:40:10.073776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.764 [2024-07-15 22:40:10.082932] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.764 [2024-07-15 22:40:10.082978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.764 [2024-07-15 22:40:10.099208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.764 [2024-07-15 22:40:10.099243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.764 [2024-07-15 22:40:10.117084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.764 [2024-07-15 22:40:10.117131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.131560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.131620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.146882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.146926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.162737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.162771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.179362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.179426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.195841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.195877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.212595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.212659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.230235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.230290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.244833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.244882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.258811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.258866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.274282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.274333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.285127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.285164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.301672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.301741] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.765 [2024-07-15 22:40:10.318014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.765 [2024-07-15 22:40:10.318058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.333651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.333776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.351121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.351219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.368121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.368169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.382605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.382649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.399088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.399135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.415148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.415194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.428139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.428196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.446672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.446718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.460741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.460789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.478652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.478688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.492341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.492387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.509290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.509349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.525710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.525756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.537402] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.537455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.551777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.551813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.568404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.568443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.024 [2024-07-15 22:40:10.585569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.024 [2024-07-15 22:40:10.585646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.601389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.601443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.617391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.617430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.634063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.634116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.651178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.651230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.668461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.668503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.682870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.682911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.698261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.698315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.715474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.715527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.731643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.731696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.749960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.750031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.763848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.763887] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.780093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.780134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.796440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.796480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.814749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.814789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.830300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.830340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.283 [2024-07-15 22:40:10.848345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.283 [2024-07-15 22:40:10.848384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.864305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.864353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.880483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.880523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.896879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.896931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.906871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.906942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.921264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.921317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.936895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.936947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.955190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.955243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.969471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.969512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:10.984857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:10.984910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.002541] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.002605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.017230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.017282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.034919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.034971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.050141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.050193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.067100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.067152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.083557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.083621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.543 [2024-07-15 22:40:11.101031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.543 [2024-07-15 22:40:11.101085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.116977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.117030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.132231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.132295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.151193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.151246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.166131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.166202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.183727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.183781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.199247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.199301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.209695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.209732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.224004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.224057] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.241442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.241496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.257802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.257856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.273440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.273493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.284340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.284380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.299388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.299440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.314454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.314506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.329991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.330044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.340144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.340198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.803 [2024-07-15 22:40:11.356698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.803 [2024-07-15 22:40:11.356752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.372731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.372782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.383103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.383143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.398721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.398761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.414820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.414860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.432242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.432305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.448871] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.448927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.465835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.465874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.482388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.482441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.062 [2024-07-15 22:40:11.497993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.062 [2024-07-15 22:40:11.498046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.507746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.507814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.523969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.524022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.541045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.541099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.556704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.556775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.572006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.572060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.581487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.581541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.597346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.597400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.612286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.612328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.063 [2024-07-15 22:40:11.628226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.063 [2024-07-15 22:40:11.628276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.645372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.645426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.662147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.662206] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.676892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.676961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.692805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.692858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.710030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.710069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.725419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.725458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.740777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.740819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.750908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.750962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.766373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.766428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.781143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.781204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.797377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.797432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.812594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.812631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.828234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.828301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.847130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.847167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.861599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.861650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.322 [2024-07-15 22:40:11.875298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.322 [2024-07-15 22:40:11.875366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.890907] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.581 [2024-07-15 22:40:11.890947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.900526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.581 [2024-07-15 22:40:11.900586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.917389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.581 [2024-07-15 22:40:11.917430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.933451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.581 [2024-07-15 22:40:11.933506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.950428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.581 [2024-07-15 22:40:11.950467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.966118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.581 [2024-07-15 22:40:11.966171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.581 [2024-07-15 22:40:11.976189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:11.976266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:11.991253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:11.991293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.002425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.002482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.017938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.017992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.033881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.033951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.044048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.044088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.059769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.059821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.075258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.075312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.090420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.090473] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.107131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.107184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.122605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.122674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.132727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.132781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.582 [2024-07-15 22:40:12.147861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.582 [2024-07-15 22:40:12.147912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.158313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.158367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.172920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.172988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.182325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.182379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.197767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.197819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.215282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.215322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.231964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.232016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.248016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.248070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.267167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.267220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.282487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.282528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.300670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.300709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.315320] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.315374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.330955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.331008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.348923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.348977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.365686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.365723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.382077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.382131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.841 [2024-07-15 22:40:12.398135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.841 [2024-07-15 22:40:12.398188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.413106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.413143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.429018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.429073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.439287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.439328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.454559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.454610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.469930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.469983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.485898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.485954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.495765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.495818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.510625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.510678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.525391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.525445] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.540704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.540757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.556037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.556090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.565719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.565774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.580647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.580699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.595517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.595595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.605441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.605511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.621719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.621755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.637312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.637366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.653013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.653084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.101 [2024-07-15 22:40:12.663403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.101 [2024-07-15 22:40:12.663473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.678958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.679063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.695006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.695078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.705133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.705202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.720906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.720971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.736807] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.736867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.750517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.750605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.766425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.766501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.776759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.776823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.792966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.793027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.810239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.810291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.827115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.827183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.842831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.842884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.852952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.853008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.868278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.868334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.884347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.884397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.899916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.899972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.909984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.910024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.360 [2024-07-15 22:40:12.926324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.360 [2024-07-15 22:40:12.926377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.619 [2024-07-15 22:40:12.941622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.619 [2024-07-15 22:40:12.941663] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.619 [2024-07-15 22:40:12.956528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.619 [2024-07-15 22:40:12.956586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.619 [2024-07-15 22:40:12.972366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.619 [2024-07-15 22:40:12.972406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.619 [2024-07-15 22:40:12.990533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.619 [2024-07-15 22:40:12.990586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.619 [2024-07-15 22:40:13.005510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.619 [2024-07-15 22:40:13.005551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.015422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.015463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.032438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.032478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.047811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.047851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.062619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.062678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.078238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.078292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.096636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.096688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.111772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.111825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.121868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.121922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.137465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.137518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.152972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.153043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.168955] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.169009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.620 [2024-07-15 22:40:13.185377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.620 [2024-07-15 22:40:13.185418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.202226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.202283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.219101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.219143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.234453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.234508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.249834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.249890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.259774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.259828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.274906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.274963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.290051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.290105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.305708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.305749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.322865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.322920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.338764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.338801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.357727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.357781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.372086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.372140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.387125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.387165] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.402261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.402302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.411647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.411687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.428386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.428438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.878 [2024-07-15 22:40:13.443938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.878 [2024-07-15 22:40:13.443993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.459923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.459990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.476394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.476440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.493282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.493364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.508152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.508212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.523092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.523165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.538700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.538766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.554968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.555021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.571901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.571961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.587327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.587404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.603220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.603293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.613220] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.613275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.628857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.628911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.645053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.645124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.661809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.661883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.678006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.678075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.137 [2024-07-15 22:40:13.695122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.137 [2024-07-15 22:40:13.695173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.711390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.711461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.721322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.721375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.732827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.732880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.743128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.743181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.753703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.753772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.765082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.765135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.776063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.776115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.787848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.787933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.799757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.799795] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.811519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.811589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.823183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.823237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.835113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.835165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.846622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.846688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.858160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.858213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.870438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.870491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.881967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.882020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.895168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.895222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.905181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.396 [2024-07-15 22:40:13.905233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.396 [2024-07-15 22:40:13.917401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.397 [2024-07-15 22:40:13.917487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.397 [2024-07-15 22:40:13.929244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.397 [2024-07-15 22:40:13.929297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.397 [2024-07-15 22:40:13.941413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.397 [2024-07-15 22:40:13.941500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.397 [2024-07-15 22:40:13.952898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.397 [2024-07-15 22:40:13.952951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:13.964652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:13.964704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:13.976895] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:13.976949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:13.988480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:13.988520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.000196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.000237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.011667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.011718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.022868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.022920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.033839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.033892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.045278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.045333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.057010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.057064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.068760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.068813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.080629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.080671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.092089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.092141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.103896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.103949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.115340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.115393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.126947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.127003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.139274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.139343] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.149207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.149260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.161649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.161702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.172906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.172975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.184385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.184424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.195396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.195449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.206995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.207037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.656 [2024-07-15 22:40:14.218682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.656 [2024-07-15 22:40:14.218721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.230865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.230918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.242304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.242357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.254542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.254603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.266446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.266486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.278421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.278462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.289792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.289859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.301475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.301529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.313444] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.313484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.325623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.325704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.337638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.337690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.349551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.349638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.361493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.361548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.373239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.373292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.384720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.384774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.396715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.396770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.408277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.408318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.420149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.420188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.432707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.432778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.444676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.444714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.456135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.456174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.467771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.467810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.915 [2024-07-15 22:40:14.479173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.915 [2024-07-15 22:40:14.479213] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.490870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.490910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.502582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.502622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.513480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.513519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.524431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.524468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.541735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.541772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.558609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.558671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.574784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.574836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.592054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.592088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.608097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.608152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.625325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.625378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.640755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.640807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 00:09:59.175 Latency(us) 00:09:59.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.175 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:59.175 Nvme1n1 : 5.01 11017.27 86.07 0.00 0.00 11602.18 4915.20 26333.56 00:09:59.175 =================================================================================================================== 00:09:59.175 Total : 11017.27 86.07 0.00 0.00 11602.18 4915.20 26333.56 00:09:59.175 [2024-07-15 22:40:14.650108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.650145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.662094] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.662143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.674113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.674152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.686112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.686149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.698120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.698159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.710126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.710168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.722133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.722171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.175 [2024-07-15 22:40:14.734138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.175 [2024-07-15 22:40:14.734182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.746159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.746198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.758163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.758202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.770165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.770204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.782178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.782215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.794148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.794177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.806170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.806208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.818167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.818203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.830164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.830194] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.842174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.842207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.854208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.854260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.866180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.866210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 [2024-07-15 22:40:14.878195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.434 [2024-07-15 22:40:14.878221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.434 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67821) - No such process 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67821 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.434 delay0 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.434 22:40:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.435 22:40:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:59.693 [2024-07-15 22:40:15.082342] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:06.254 Initializing NVMe Controllers 00:10:06.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:06.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:06.254 Initialization complete. Launching workers. 
00:10:06.254 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 371 00:10:06.254 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 658, failed to submit 33 00:10:06.254 success 562, unsuccess 96, failed 0 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.254 rmmod nvme_tcp 00:10:06.254 rmmod nvme_fabrics 00:10:06.254 rmmod nvme_keyring 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67672 ']' 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67672 00:10:06.254 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67672 ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67672 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67672 00:10:06.255 killing process with pid 67672 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67672' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67672 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67672 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:06.255 ************************************ 00:10:06.255 END TEST nvmf_zcopy 00:10:06.255 ************************************ 00:10:06.255 00:10:06.255 real 
0m24.773s 00:10:06.255 user 0m40.398s 00:10:06.255 sys 0m6.938s 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.255 22:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.255 22:40:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:06.255 22:40:21 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:06.255 22:40:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:06.255 22:40:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.255 22:40:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.255 ************************************ 00:10:06.255 START TEST nvmf_nmic 00:10:06.255 ************************************ 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:06.255 * Looking for test storage... 00:10:06.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:06.255 Cannot find device "nvmf_tgt_br" 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:06.255 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.514 Cannot find device "nvmf_tgt_br2" 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:06.514 Cannot find device "nvmf_tgt_br" 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:06.514 Cannot find device "nvmf_tgt_br2" 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:06.514 22:40:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.514 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:06.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:10:06.773 00:10:06.773 --- 10.0.0.2 ping statistics --- 00:10:06.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.773 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:06.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:10:06.773 00:10:06.773 --- 10.0.0.3 ping statistics --- 00:10:06.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.773 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:06.773 00:10:06.773 --- 10.0.0.1 ping statistics --- 00:10:06.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.773 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68143 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68143 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68143 ']' 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.773 22:40:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.773 [2024-07-15 22:40:22.176488] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:10:06.773 [2024-07-15 22:40:22.176626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.773 [2024-07-15 22:40:22.318377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.031 [2024-07-15 22:40:22.446946] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.031 [2024-07-15 22:40:22.447259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.031 [2024-07-15 22:40:22.447416] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.031 [2024-07-15 22:40:22.447632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.031 [2024-07-15 22:40:22.447804] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.031 [2024-07-15 22:40:22.448103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.031 [2024-07-15 22:40:22.448242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.031 [2024-07-15 22:40:22.448327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.031 [2024-07-15 22:40:22.448826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.031 [2024-07-15 22:40:22.503866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:07.595 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.595 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:07.595 22:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.595 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.595 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.852 22:40:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.852 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.852 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.852 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.852 [2024-07-15 22:40:23.204059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.852 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 Malloc0 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 [2024-07-15 22:40:23.276770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.853 test case1: single bdev can't be used in multiple subsystems 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 [2024-07-15 22:40:23.300524] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:07.853 [2024-07-15 22:40:23.300586] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:07.853 [2024-07-15 22:40:23.300603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.853 request: 00:10:07.853 { 00:10:07.853 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:07.853 "namespace": { 00:10:07.853 "bdev_name": "Malloc0", 00:10:07.853 "no_auto_visible": false 00:10:07.853 }, 00:10:07.853 "method": "nvmf_subsystem_add_ns", 00:10:07.853 "req_id": 1 00:10:07.853 } 00:10:07.853 Got JSON-RPC error response 00:10:07.853 response: 00:10:07.853 { 00:10:07.853 "code": -32602, 00:10:07.853 "message": "Invalid parameters" 00:10:07.853 } 00:10:07.853 Adding namespace failed - expected result. 
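The -32602 error above is the expected outcome of nmic.sh test case1: Malloc0 is already claimed (type exclusive_write) by nqn.2016-06.io.spdk:cnode1, so a second nvmf_subsystem_add_ns against cnode2 must fail. A minimal sketch of the same check driven through scripts/rpc.py directly, assuming a running nvmf_tgt reachable on the default RPC socket (every subcommand below appears in the rpc_cmd calls earlier in this log):

    # sketch: one malloc bdev shared between two subsystems -- the second add must fail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # succeeds and claims Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: bdev already claimed, JSON-RPC error -32602

Test case2, which follows below, exercises the opposite direction: the same subsystem is exposed on two listeners (ports 4420 and 4421) so the host can connect to one target over multiple paths.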
00:10:07.853 test case2: host connect to nvmf target in multiple paths 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.853 [2024-07-15 22:40:23.312691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.853 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.110 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:08.110 22:40:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.110 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.110 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.110 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:08.110 22:40:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:10.639 22:40:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.639 [global] 00:10:10.639 thread=1 00:10:10.639 invalidate=1 00:10:10.639 rw=write 00:10:10.639 time_based=1 00:10:10.639 runtime=1 00:10:10.639 ioengine=libaio 00:10:10.639 direct=1 00:10:10.639 bs=4096 00:10:10.639 iodepth=1 00:10:10.639 norandommap=0 00:10:10.639 numjobs=1 00:10:10.639 00:10:10.639 verify_dump=1 00:10:10.639 verify_backlog=512 00:10:10.639 verify_state_save=0 00:10:10.639 do_verify=1 00:10:10.639 verify=crc32c-intel 00:10:10.639 [job0] 00:10:10.639 filename=/dev/nvme0n1 00:10:10.639 Could not set queue depth (nvme0n1) 00:10:10.639 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.639 fio-3.35 00:10:10.639 Starting 1 thread 00:10:11.615 00:10:11.615 job0: (groupid=0, jobs=1): err= 0: pid=68237: Mon Jul 15 22:40:26 2024 00:10:11.615 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:11.615 slat (nsec): min=12488, max=70725, avg=17486.00, stdev=5875.77 00:10:11.615 clat (usec): min=131, max=410, avg=171.39, stdev=20.66 00:10:11.615 lat (usec): min=147, max=440, avg=188.87, stdev=22.84 00:10:11.615 clat percentiles (usec): 00:10:11.615 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:11.615 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:11.615 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 204], 00:10:11.615 | 99.00th=[ 231], 99.50th=[ 249], 99.90th=[ 371], 99.95th=[ 379], 00:10:11.615 | 99.99th=[ 412] 00:10:11.615 write: IOPS=3141, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:10:11.615 slat (nsec): min=15177, max=97287, avg=24040.11, stdev=6004.96 00:10:11.615 clat (usec): min=81, max=332, avg=105.47, stdev=15.75 00:10:11.615 lat (usec): min=100, max=350, avg=129.51, stdev=17.63 00:10:11.615 clat percentiles (usec): 00:10:11.615 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:10:11.615 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 106], 00:10:11.615 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 124], 95.00th=[ 133], 00:10:11.615 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 265], 99.95th=[ 306], 00:10:11.615 | 99.99th=[ 334] 00:10:11.615 bw ( KiB/s): min=12824, max=12824, per=100.00%, avg=12824.00, stdev= 0.00, samples=1 00:10:11.615 iops : min= 3206, max= 3206, avg=3206.00, stdev= 0.00, samples=1 00:10:11.615 lat (usec) : 100=21.92%, 250=77.74%, 500=0.34% 00:10:11.615 cpu : usr=2.70%, sys=10.10%, ctx=6217, majf=0, minf=2 00:10:11.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.615 issued rwts: total=3072,3145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.615 00:10:11.615 Run status group 0 (all jobs): 00:10:11.615 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:11.615 WRITE: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:10:11.615 00:10:11.615 Disk stats (read/write): 00:10:11.615 nvme0n1: ios=2663/3072, merge=0/0, ticks=496/365, in_queue=861, util=91.38% 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # return 0 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.615 22:40:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.615 rmmod nvme_tcp 00:10:11.615 rmmod nvme_fabrics 00:10:11.615 rmmod nvme_keyring 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68143 ']' 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68143 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68143 ']' 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68143 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68143 00:10:11.615 killing process with pid 68143 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68143' 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68143 00:10:11.615 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68143 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:11.872 00:10:11.872 real 0m5.724s 00:10:11.872 user 0m18.193s 00:10:11.872 sys 0m2.328s 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.872 22:40:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.872 ************************************ 00:10:11.872 END TEST nvmf_nmic 00:10:11.872 ************************************ 00:10:12.130 22:40:27 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:12.130 22:40:27 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:12.130 22:40:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.130 22:40:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.130 22:40:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.130 ************************************ 00:10:12.130 START TEST nvmf_fio_target 00:10:12.130 ************************************ 00:10:12.130 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:12.130 * Looking for test storage... 00:10:12.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.130 22:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.130 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:12.130 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.130 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.130 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.131 Cannot find device "nvmf_tgt_br" 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.131 Cannot find device "nvmf_tgt_br2" 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:12.131 Cannot find device "nvmf_tgt_br" 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.131 Cannot find device "nvmf_tgt_br2" 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.131 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:10:12.390 00:10:12.390 --- 10.0.0.2 ping statistics --- 00:10:12.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.390 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:12.390 00:10:12.390 --- 10.0.0.3 ping statistics --- 00:10:12.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.390 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:12.390 00:10:12.390 --- 10.0.0.1 ping statistics --- 00:10:12.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.390 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.390 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68415 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68415 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68415 ']' 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.649 22:40:27 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.649 22:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.649 [2024-07-15 22:40:28.011321] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:10:12.649 [2024-07-15 22:40:28.011432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.649 [2024-07-15 22:40:28.146827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.907 [2024-07-15 22:40:28.267392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.907 [2024-07-15 22:40:28.267640] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.907 [2024-07-15 22:40:28.267780] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.907 [2024-07-15 22:40:28.267913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.907 [2024-07-15 22:40:28.267948] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.907 [2024-07-15 22:40:28.268201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.907 [2024-07-15 22:40:28.268307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.907 [2024-07-15 22:40:28.268385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.907 [2024-07-15 22:40:28.268386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.907 [2024-07-15 22:40:28.323651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.473 22:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.473 22:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:13.473 22:40:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.732 22:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.732 22:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.732 22:40:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.732 22:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.990 [2024-07-15 22:40:29.354305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.990 22:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.249 22:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:14.249 22:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 
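For reference, the target bring-up recorded in the xtrace entries above and in the entries that follow can be condensed into a short shell sketch. Every command, path, NQN and address below is taken from this log; the serial-wait loop is a simplified rendering of waitforserial and assumes the four namespaces used in this run.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Transport and two plain malloc bdevs (target/fio.sh@19-22)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512                      # Malloc0
  $rpc_py bdev_malloc_create 64 512                      # Malloc1

  # RAID0 and concat bdevs built from further malloc bdevs (target/fio.sh@24-32)
  $rpc_py bdev_malloc_create 64 512                      # Malloc2
  $rpc_py bdev_malloc_create 64 512                      # Malloc3
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc_py bdev_malloc_create 64 512                      # Malloc4
  $rpc_py bdev_malloc_create 64 512                      # Malloc5
  $rpc_py bdev_malloc_create 64 512                      # Malloc6
  $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # Subsystem, namespaces and TCP listener, in the order fio.sh issues them (target/fio.sh@34-44)
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Initiator side: connect over TCP and wait until all four namespaces show up (target/fio.sh@46-48)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 \
      --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0
  i=0
  while (( i++ <= 15 )); do
      sleep 2
      # count block devices carrying the subsystem serial; break once all four are visible
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 4 )) && break
  done

Once the namespaces are visible as /dev/nvme0n1 through /dev/nvme0n4, the fio-wrapper invocations traced below drive the actual write, randwrite and read workloads against them.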
00:10:14.507 22:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:14.507 22:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.766 22:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:14.766 22:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.023 22:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:15.023 22:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:15.281 22:40:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.540 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:15.540 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.797 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:15.797 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.055 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:16.055 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:16.326 22:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.608 22:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.608 22:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.867 22:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:16.867 22:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:17.125 22:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.383 [2024-07-15 22:40:32.743171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.383 22:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:17.641 22:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:17.902 22:40:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.432 22:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.432 22:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.432 22:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.433 22:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:20.433 22:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.433 22:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:20.433 22:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.433 [global] 00:10:20.433 thread=1 00:10:20.433 invalidate=1 00:10:20.433 rw=write 00:10:20.433 time_based=1 00:10:20.433 runtime=1 00:10:20.433 ioengine=libaio 00:10:20.433 direct=1 00:10:20.433 bs=4096 00:10:20.433 iodepth=1 00:10:20.433 norandommap=0 00:10:20.433 numjobs=1 00:10:20.433 00:10:20.433 verify_dump=1 00:10:20.433 verify_backlog=512 00:10:20.433 verify_state_save=0 00:10:20.433 do_verify=1 00:10:20.433 verify=crc32c-intel 00:10:20.433 [job0] 00:10:20.433 filename=/dev/nvme0n1 00:10:20.433 [job1] 00:10:20.433 filename=/dev/nvme0n2 00:10:20.433 [job2] 00:10:20.433 filename=/dev/nvme0n3 00:10:20.433 [job3] 00:10:20.433 filename=/dev/nvme0n4 00:10:20.433 Could not set queue depth (nvme0n1) 00:10:20.433 Could not set queue depth (nvme0n2) 00:10:20.433 Could not set queue depth (nvme0n3) 00:10:20.433 Could not set queue depth (nvme0n4) 00:10:20.433 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.433 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.433 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.433 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.433 fio-3.35 00:10:20.433 Starting 4 threads 00:10:21.368 00:10:21.368 job0: (groupid=0, jobs=1): err= 0: pid=68600: Mon Jul 15 22:40:36 2024 00:10:21.368 read: IOPS=1471, BW=5886KiB/s (6027kB/s)(5892KiB/1001msec) 00:10:21.368 slat (nsec): min=13786, max=90261, avg=28668.02, stdev=12091.01 00:10:21.368 clat (usec): min=172, max=1191, avg=404.02, stdev=132.27 00:10:21.368 lat (usec): min=188, max=1253, avg=432.68, stdev=141.12 00:10:21.368 clat percentiles (usec): 00:10:21.368 | 1.00th=[ 221], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 269], 00:10:21.368 | 30.00th=[ 289], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 478], 00:10:21.368 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 603], 00:10:21.368 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 1188], 00:10:21.368 | 99.99th=[ 1188] 00:10:21.368 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:21.368 slat (usec): 
min=19, max=104, avg=27.58, stdev= 7.66 00:10:21.368 clat (usec): min=99, max=506, avg=203.16, stdev=37.74 00:10:21.368 lat (usec): min=121, max=529, avg=230.74, stdev=38.91 00:10:21.368 clat percentiles (usec): 00:10:21.368 | 1.00th=[ 110], 5.00th=[ 157], 10.00th=[ 169], 20.00th=[ 180], 00:10:21.368 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:10:21.368 | 70.00th=[ 212], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:10:21.368 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 453], 99.95th=[ 506], 00:10:21.368 | 99.99th=[ 506] 00:10:21.368 bw ( KiB/s): min= 8192, max= 8192, per=23.59%, avg=8192.00, stdev= 0.00, samples=1 00:10:21.368 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:21.368 lat (usec) : 100=0.03%, 250=48.99%, 500=34.23%, 750=16.72% 00:10:21.368 lat (msec) : 2=0.03% 00:10:21.368 cpu : usr=2.30%, sys=6.30%, ctx=3013, majf=0, minf=11 00:10:21.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.368 issued rwts: total=1473,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.368 job1: (groupid=0, jobs=1): err= 0: pid=68601: Mon Jul 15 22:40:36 2024 00:10:21.368 read: IOPS=1895, BW=7580KiB/s (7762kB/s)(7588KiB/1001msec) 00:10:21.368 slat (nsec): min=8751, max=43480, avg=15441.38, stdev=3865.62 00:10:21.369 clat (usec): min=138, max=6370, avg=328.59, stdev=260.77 00:10:21.369 lat (usec): min=153, max=6395, avg=344.03, stdev=261.37 00:10:21.369 clat percentiles (usec): 00:10:21.369 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:10:21.369 | 30.00th=[ 178], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 367], 00:10:21.369 | 70.00th=[ 392], 80.00th=[ 420], 90.00th=[ 478], 95.00th=[ 494], 00:10:21.369 | 99.00th=[ 553], 99.50th=[ 701], 99.90th=[ 6128], 99.95th=[ 6390], 00:10:21.369 | 99.99th=[ 6390] 00:10:21.369 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:21.369 slat (nsec): min=11860, max=86343, avg=19224.13, stdev=4556.01 00:10:21.369 clat (usec): min=89, max=718, avg=147.03, stdev=36.93 00:10:21.369 lat (usec): min=108, max=745, avg=166.25, stdev=36.34 00:10:21.369 clat percentiles (usec): 00:10:21.369 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 114], 00:10:21.369 | 30.00th=[ 119], 40.00th=[ 127], 50.00th=[ 145], 60.00th=[ 157], 00:10:21.369 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 206], 00:10:21.369 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 260], 99.95th=[ 262], 00:10:21.369 | 99.99th=[ 717] 00:10:21.369 bw ( KiB/s): min=10368, max=10368, per=29.86%, avg=10368.00, stdev= 0.00, samples=1 00:10:21.369 iops : min= 2592, max= 2592, avg=2592.00, stdev= 0.00, samples=1 00:10:21.369 lat (usec) : 100=1.67%, 250=66.87%, 500=29.48%, 750=1.77% 00:10:21.369 lat (msec) : 2=0.08%, 4=0.08%, 10=0.05% 00:10:21.369 cpu : usr=1.60%, sys=5.50%, ctx=3946, majf=0, minf=10 00:10:21.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.369 issued rwts: total=1897,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.369 job2: (groupid=0, jobs=1): 
err= 0: pid=68602: Mon Jul 15 22:40:36 2024 00:10:21.369 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:21.369 slat (nsec): min=9050, max=55216, avg=15446.92, stdev=4126.47 00:10:21.369 clat (usec): min=162, max=541, avg=333.61, stdev=54.72 00:10:21.369 lat (usec): min=190, max=557, avg=349.06, stdev=54.07 00:10:21.369 clat percentiles (usec): 00:10:21.369 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 273], 00:10:21.369 | 30.00th=[ 297], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:10:21.369 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 412], 00:10:21.369 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 510], 99.95th=[ 545], 00:10:21.369 | 99.99th=[ 545] 00:10:21.369 write: IOPS=2030, BW=8124KiB/s (8319kB/s)(8132KiB/1001msec); 0 zone resets 00:10:21.369 slat (nsec): min=11683, max=79898, avg=20937.38, stdev=4701.56 00:10:21.369 clat (usec): min=114, max=481, avg=204.30, stdev=34.62 00:10:21.369 lat (usec): min=143, max=503, avg=225.24, stdev=35.91 00:10:21.369 clat percentiles (usec): 00:10:21.369 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:10:21.369 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:10:21.369 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 251], 00:10:21.369 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 420], 99.95th=[ 437], 00:10:21.369 | 99.99th=[ 482] 00:10:21.369 bw ( KiB/s): min= 8192, max= 8192, per=23.59%, avg=8192.00, stdev= 0.00, samples=1 00:10:21.369 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:21.369 lat (usec) : 250=56.63%, 500=43.32%, 750=0.06% 00:10:21.369 cpu : usr=1.50%, sys=5.20%, ctx=3569, majf=0, minf=11 00:10:21.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.369 issued rwts: total=1536,2033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.369 job3: (groupid=0, jobs=1): err= 0: pid=68603: Mon Jul 15 22:40:36 2024 00:10:21.369 read: IOPS=2564, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:21.369 slat (nsec): min=11820, max=63745, avg=16898.93, stdev=5345.61 00:10:21.369 clat (usec): min=145, max=3166, avg=181.09, stdev=63.48 00:10:21.369 lat (usec): min=158, max=3199, avg=197.99, stdev=64.31 00:10:21.369 clat percentiles (usec): 00:10:21.369 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:21.369 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:10:21.369 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:10:21.369 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 685], 99.95th=[ 1037], 00:10:21.369 | 99.99th=[ 3163] 00:10:21.369 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:21.369 slat (usec): min=13, max=106, avg=24.89, stdev= 9.81 00:10:21.369 clat (usec): min=102, max=518, avg=131.68, stdev=13.74 00:10:21.369 lat (usec): min=120, max=576, avg=156.57, stdev=18.55 00:10:21.369 clat percentiles (usec): 00:10:21.369 | 1.00th=[ 108], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 122], 00:10:21.369 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:10:21.369 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:10:21.369 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 202], 99.95th=[ 247], 00:10:21.369 | 99.99th=[ 519] 00:10:21.369 bw ( KiB/s): 
min=12288, max=12288, per=35.39%, avg=12288.00, stdev= 0.00, samples=1 00:10:21.369 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:21.369 lat (usec) : 250=99.93%, 750=0.04% 00:10:21.369 lat (msec) : 2=0.02%, 4=0.02% 00:10:21.369 cpu : usr=2.70%, sys=9.30%, ctx=5644, majf=0, minf=3 00:10:21.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.369 issued rwts: total=2567,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.369 00:10:21.369 Run status group 0 (all jobs): 00:10:21.369 READ: bw=29.2MiB/s (30.6MB/s), 5886KiB/s-10.0MiB/s (6027kB/s-10.5MB/s), io=29.2MiB (30.6MB), run=1001-1001msec 00:10:21.369 WRITE: bw=33.9MiB/s (35.6MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=33.9MiB (35.6MB), run=1001-1001msec 00:10:21.369 00:10:21.369 Disk stats (read/write): 00:10:21.369 nvme0n1: ios=1265/1536, merge=0/0, ticks=515/326, in_queue=841, util=87.86% 00:10:21.369 nvme0n2: ios=1574/1997, merge=0/0, ticks=482/289, in_queue=771, util=86.31% 00:10:21.369 nvme0n3: ios=1398/1536, merge=0/0, ticks=463/319, in_queue=782, util=89.16% 00:10:21.369 nvme0n4: ios=2198/2560, merge=0/0, ticks=391/369, in_queue=760, util=89.40% 00:10:21.369 22:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:21.369 [global] 00:10:21.369 thread=1 00:10:21.369 invalidate=1 00:10:21.369 rw=randwrite 00:10:21.369 time_based=1 00:10:21.369 runtime=1 00:10:21.369 ioengine=libaio 00:10:21.369 direct=1 00:10:21.369 bs=4096 00:10:21.369 iodepth=1 00:10:21.369 norandommap=0 00:10:21.369 numjobs=1 00:10:21.369 00:10:21.369 verify_dump=1 00:10:21.369 verify_backlog=512 00:10:21.369 verify_state_save=0 00:10:21.369 do_verify=1 00:10:21.369 verify=crc32c-intel 00:10:21.369 [job0] 00:10:21.369 filename=/dev/nvme0n1 00:10:21.369 [job1] 00:10:21.369 filename=/dev/nvme0n2 00:10:21.369 [job2] 00:10:21.369 filename=/dev/nvme0n3 00:10:21.369 [job3] 00:10:21.369 filename=/dev/nvme0n4 00:10:21.369 Could not set queue depth (nvme0n1) 00:10:21.369 Could not set queue depth (nvme0n2) 00:10:21.369 Could not set queue depth (nvme0n3) 00:10:21.369 Could not set queue depth (nvme0n4) 00:10:21.639 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.639 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.639 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.639 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.639 fio-3.35 00:10:21.639 Starting 4 threads 00:10:22.601 00:10:22.601 job0: (groupid=0, jobs=1): err= 0: pid=68662: Mon Jul 15 22:40:38 2024 00:10:22.601 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:22.601 slat (nsec): min=7891, max=63680, avg=12957.16, stdev=7446.58 00:10:22.601 clat (usec): min=161, max=746, avg=307.29, stdev=35.81 00:10:22.601 lat (usec): min=185, max=756, avg=320.25, stdev=38.17 00:10:22.601 clat percentiles (usec): 00:10:22.601 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:10:22.601 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 
60.00th=[ 310], 00:10:22.601 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 363], 00:10:22.602 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 586], 99.95th=[ 750], 00:10:22.602 | 99.99th=[ 750] 00:10:22.602 write: IOPS=1834, BW=7337KiB/s (7513kB/s)(7344KiB/1001msec); 0 zone resets 00:10:22.602 slat (nsec): min=10899, max=92686, avg=18971.14, stdev=8222.39 00:10:22.602 clat (usec): min=87, max=7157, avg=254.93, stdev=239.02 00:10:22.602 lat (usec): min=104, max=7191, avg=273.90, stdev=240.12 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 1.00th=[ 125], 5.00th=[ 178], 10.00th=[ 202], 20.00th=[ 215], 00:10:22.602 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 255], 00:10:22.602 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 310], 00:10:22.602 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 6259], 99.95th=[ 7177], 00:10:22.602 | 99.99th=[ 7177] 00:10:22.602 bw ( KiB/s): min= 8192, max= 8192, per=23.68%, avg=8192.00, stdev= 0.00, samples=1 00:10:22.602 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:22.602 lat (usec) : 100=0.24%, 250=29.30%, 500=70.11%, 750=0.21% 00:10:22.602 lat (msec) : 2=0.06%, 10=0.09% 00:10:22.602 cpu : usr=1.70%, sys=4.10%, ctx=3439, majf=0, minf=11 00:10:22.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 issued rwts: total=1536,1836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.602 job1: (groupid=0, jobs=1): err= 0: pid=68663: Mon Jul 15 22:40:38 2024 00:10:22.602 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:22.602 slat (nsec): min=8004, max=56918, avg=15084.49, stdev=5690.08 00:10:22.602 clat (usec): min=164, max=744, avg=306.82, stdev=31.39 00:10:22.602 lat (usec): min=176, max=763, avg=321.90, stdev=31.85 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 285], 00:10:22.602 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:10:22.602 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 375], 00:10:22.602 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 510], 99.95th=[ 742], 00:10:22.602 | 99.99th=[ 742] 00:10:22.602 write: IOPS=1868, BW=7473KiB/s (7652kB/s)(7480KiB/1001msec); 0 zone resets 00:10:22.602 slat (usec): min=5, max=102, avg=21.50, stdev=15.77 00:10:22.602 clat (usec): min=119, max=443, avg=245.74, stdev=33.50 00:10:22.602 lat (usec): min=157, max=471, avg=267.24, stdev=37.71 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 208], 20.00th=[ 221], 00:10:22.602 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 253], 00:10:22.602 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:10:22.602 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 441], 99.95th=[ 445], 00:10:22.602 | 99.99th=[ 445] 00:10:22.602 bw ( KiB/s): min= 8192, max= 8192, per=23.68%, avg=8192.00, stdev= 0.00, samples=1 00:10:22.602 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:22.602 lat (usec) : 250=30.39%, 500=69.52%, 750=0.09% 00:10:22.602 cpu : usr=1.60%, sys=4.90%, ctx=3778, majf=0, minf=9 00:10:22.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:10:22.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 issued rwts: total=1536,1870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.602 job2: (groupid=0, jobs=1): err= 0: pid=68664: Mon Jul 15 22:40:38 2024 00:10:22.602 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:10:22.602 slat (nsec): min=11047, max=58787, avg=13873.37, stdev=3470.71 00:10:22.602 clat (usec): min=144, max=561, avg=176.08, stdev=31.01 00:10:22.602 lat (usec): min=156, max=575, avg=189.95, stdev=33.10 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:10:22.602 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:10:22.602 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 208], 00:10:22.602 | 99.00th=[ 363], 99.50th=[ 392], 99.90th=[ 494], 99.95th=[ 537], 00:10:22.602 | 99.99th=[ 562] 00:10:22.602 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:22.602 slat (usec): min=14, max=174, avg=21.64, stdev= 8.32 00:10:22.602 clat (usec): min=3, max=1598, avg=132.99, stdev=35.98 00:10:22.602 lat (usec): min=117, max=1623, avg=154.62, stdev=36.81 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 1.00th=[ 103], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 115], 00:10:22.602 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 128], 60.00th=[ 133], 00:10:22.602 | 70.00th=[ 139], 80.00th=[ 151], 90.00th=[ 165], 95.00th=[ 178], 00:10:22.602 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 330], 99.95th=[ 578], 00:10:22.602 | 99.99th=[ 1598] 00:10:22.602 bw ( KiB/s): min=12288, max=12288, per=35.52%, avg=12288.00, stdev= 0.00, samples=1 00:10:22.602 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:22.602 lat (usec) : 4=0.02%, 20=0.02%, 50=0.02%, 100=0.07%, 250=98.91% 00:10:22.602 lat (usec) : 500=0.90%, 750=0.05% 00:10:22.602 lat (msec) : 2=0.02% 00:10:22.602 cpu : usr=2.50%, sys=8.00%, ctx=5803, majf=0, minf=16 00:10:22.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 issued rwts: total=2710,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.602 job3: (groupid=0, jobs=1): err= 0: pid=68665: Mon Jul 15 22:40:38 2024 00:10:22.602 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:22.602 slat (nsec): min=8054, max=90188, avg=14357.50, stdev=5759.64 00:10:22.602 clat (usec): min=166, max=782, avg=307.84, stdev=33.47 00:10:22.602 lat (usec): min=179, max=793, avg=322.19, stdev=34.18 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 285], 00:10:22.602 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:10:22.602 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 371], 00:10:22.602 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 510], 99.95th=[ 783], 00:10:22.602 | 99.99th=[ 783] 00:10:22.602 write: IOPS=1877, BW=7508KiB/s (7689kB/s)(7516KiB/1001msec); 0 zone resets 00:10:22.602 slat (usec): min=5, max=112, avg=22.63, stdev=13.84 00:10:22.602 clat (usec): min=107, max=463, avg=242.99, stdev=34.70 00:10:22.602 lat (usec): min=126, max=494, avg=265.62, stdev=36.82 00:10:22.602 clat percentiles (usec): 00:10:22.602 | 
1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 208], 20.00th=[ 223], 00:10:22.602 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:10:22.602 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:10:22.602 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 429], 99.95th=[ 465], 00:10:22.602 | 99.99th=[ 465] 00:10:22.602 bw ( KiB/s): min= 8192, max= 8192, per=23.68%, avg=8192.00, stdev= 0.00, samples=1 00:10:22.602 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:22.602 lat (usec) : 250=35.61%, 500=64.33%, 750=0.03%, 1000=0.03% 00:10:22.602 cpu : usr=1.50%, sys=5.10%, ctx=3731, majf=0, minf=9 00:10:22.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.602 issued rwts: total=1536,1879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.602 00:10:22.602 Run status group 0 (all jobs): 00:10:22.602 READ: bw=28.6MiB/s (29.9MB/s), 6138KiB/s-10.6MiB/s (6285kB/s-11.1MB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:10:22.602 WRITE: bw=33.8MiB/s (35.4MB/s), 7337KiB/s-12.0MiB/s (7513kB/s-12.6MB/s), io=33.8MiB (35.5MB), run=1001-1001msec 00:10:22.602 00:10:22.602 Disk stats (read/write): 00:10:22.602 nvme0n1: ios=1417/1536, merge=0/0, ticks=419/344, in_queue=763, util=88.87% 00:10:22.602 nvme0n2: ios=1463/1536, merge=0/0, ticks=423/350, in_queue=773, util=89.39% 00:10:22.602 nvme0n3: ios=2577/2641, merge=0/0, ticks=464/358, in_queue=822, util=89.66% 00:10:22.602 nvme0n4: ios=1419/1536, merge=0/0, ticks=411/356, in_queue=767, util=89.92% 00:10:22.602 22:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:22.602 [global] 00:10:22.602 thread=1 00:10:22.602 invalidate=1 00:10:22.602 rw=write 00:10:22.602 time_based=1 00:10:22.602 runtime=1 00:10:22.602 ioengine=libaio 00:10:22.602 direct=1 00:10:22.602 bs=4096 00:10:22.602 iodepth=128 00:10:22.602 norandommap=0 00:10:22.602 numjobs=1 00:10:22.602 00:10:22.602 verify_dump=1 00:10:22.602 verify_backlog=512 00:10:22.602 verify_state_save=0 00:10:22.602 do_verify=1 00:10:22.602 verify=crc32c-intel 00:10:22.602 [job0] 00:10:22.602 filename=/dev/nvme0n1 00:10:22.602 [job1] 00:10:22.602 filename=/dev/nvme0n2 00:10:22.602 [job2] 00:10:22.602 filename=/dev/nvme0n3 00:10:22.602 [job3] 00:10:22.602 filename=/dev/nvme0n4 00:10:22.860 Could not set queue depth (nvme0n1) 00:10:22.860 Could not set queue depth (nvme0n2) 00:10:22.860 Could not set queue depth (nvme0n3) 00:10:22.860 Could not set queue depth (nvme0n4) 00:10:22.860 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.860 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.860 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.860 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:22.860 fio-3.35 00:10:22.860 Starting 4 threads 00:10:24.232 00:10:24.232 job0: (groupid=0, jobs=1): err= 0: pid=68724: Mon Jul 15 22:40:39 2024 00:10:24.232 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1002msec) 00:10:24.232 slat (usec): min=5, max=4837, avg=110.35, stdev=523.82 
00:10:24.232 clat (usec): min=636, max=16633, avg=14542.78, stdev=1259.41 00:10:24.232 lat (usec): min=3889, max=16648, avg=14653.13, stdev=1147.93 00:10:24.232 clat percentiles (usec): 00:10:24.232 | 1.00th=[10814], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:10:24.232 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:10:24.232 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15533], 95.00th=[15795], 00:10:24.232 | 99.00th=[16188], 99.50th=[16581], 99.90th=[16581], 99.95th=[16581], 00:10:24.232 | 99.99th=[16581] 00:10:24.232 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:24.232 slat (usec): min=11, max=4028, avg=109.92, stdev=477.21 00:10:24.232 clat (usec): min=7679, max=17122, avg=14393.77, stdev=1082.60 00:10:24.232 lat (usec): min=7704, max=17162, avg=14503.69, stdev=977.34 00:10:24.232 clat percentiles (usec): 00:10:24.232 | 1.00th=[10814], 5.00th=[12780], 10.00th=[13435], 20.00th=[13829], 00:10:24.232 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:10:24.232 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15401], 95.00th=[15795], 00:10:24.232 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:10:24.232 | 99.99th=[17171] 00:10:24.232 bw ( KiB/s): min=17707, max=18440, per=37.51%, avg=18073.50, stdev=518.31, samples=2 00:10:24.232 iops : min= 4426, max= 4610, avg=4518.00, stdev=130.11, samples=2 00:10:24.232 lat (usec) : 750=0.01% 00:10:24.232 lat (msec) : 4=0.07%, 10=0.66%, 20=99.26% 00:10:24.232 cpu : usr=4.60%, sys=13.29%, ctx=277, majf=0, minf=5 00:10:24.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:24.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.232 issued rwts: total=4129,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.232 job1: (groupid=0, jobs=1): err= 0: pid=68725: Mon Jul 15 22:40:39 2024 00:10:24.232 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 00:10:24.232 slat (usec): min=6, max=9672, avg=295.11, stdev=1551.30 00:10:24.232 clat (usec): min=27616, max=40357, avg=38252.80, stdev=1775.96 00:10:24.232 lat (usec): min=36400, max=40389, avg=38547.91, stdev=878.07 00:10:24.232 clat percentiles (usec): 00:10:24.232 | 1.00th=[29492], 5.00th=[36439], 10.00th=[37487], 20.00th=[37487], 00:10:24.232 | 30.00th=[38011], 40.00th=[38536], 50.00th=[38536], 60.00th=[39060], 00:10:24.232 | 70.00th=[39060], 80.00th=[39060], 90.00th=[39584], 95.00th=[40109], 00:10:24.232 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:10:24.232 | 99.99th=[40109] 00:10:24.232 write: IOPS=1785, BW=7143KiB/s (7315kB/s)(7172KiB/1004msec); 0 zone resets 00:10:24.232 slat (usec): min=13, max=10266, avg=296.99, stdev=1504.08 00:10:24.232 clat (usec): min=3012, max=41260, avg=37316.20, stdev=4618.52 00:10:24.232 lat (usec): min=11324, max=41324, avg=37613.19, stdev=4366.85 00:10:24.232 clat percentiles (usec): 00:10:24.232 | 1.00th=[11863], 5.00th=[29492], 10.00th=[36963], 20.00th=[37487], 00:10:24.232 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38536], 00:10:24.232 | 70.00th=[38536], 80.00th=[39060], 90.00th=[40109], 95.00th=[40633], 00:10:24.232 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:24.232 | 99.99th=[41157] 00:10:24.232 bw ( KiB/s): min= 5136, max= 8192, per=13.83%, avg=6664.00, 
stdev=2160.92, samples=2 00:10:24.232 iops : min= 1284, max= 2048, avg=1666.00, stdev=540.23, samples=2 00:10:24.232 lat (msec) : 4=0.03%, 20=0.96%, 50=99.01% 00:10:24.232 cpu : usr=1.60%, sys=6.28%, ctx=105, majf=0, minf=14 00:10:24.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:10:24.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.232 issued rwts: total=1536,1793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.232 job2: (groupid=0, jobs=1): err= 0: pid=68726: Mon Jul 15 22:40:39 2024 00:10:24.232 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:10:24.232 slat (usec): min=7, max=9864, avg=295.11, stdev=1552.73 00:10:24.232 clat (usec): min=27409, max=40156, avg=38343.30, stdev=1798.45 00:10:24.232 lat (usec): min=36417, max=40170, avg=38638.41, stdev=902.37 00:10:24.232 clat percentiles (usec): 00:10:24.232 | 1.00th=[29492], 5.00th=[36439], 10.00th=[36963], 20.00th=[37487], 00:10:24.232 | 30.00th=[38011], 40.00th=[38536], 50.00th=[39060], 60.00th=[39060], 00:10:24.232 | 70.00th=[39060], 80.00th=[39584], 90.00th=[39584], 95.00th=[40109], 00:10:24.232 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:10:24.232 | 99.99th=[40109] 00:10:24.232 write: IOPS=1813, BW=7254KiB/s (7428kB/s)(7276KiB/1003msec); 0 zone resets 00:10:24.232 slat (usec): min=13, max=10437, avg=293.04, stdev=1499.26 00:10:24.232 clat (usec): min=2122, max=41578, avg=36635.58, stdev=6273.56 00:10:24.232 lat (usec): min=2148, max=41614, avg=36928.62, stdev=6120.28 00:10:24.232 clat percentiles (usec): 00:10:24.232 | 1.00th=[ 2704], 5.00th=[21103], 10.00th=[36439], 20.00th=[36963], 00:10:24.232 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:10:24.232 | 70.00th=[38536], 80.00th=[39060], 90.00th=[39584], 95.00th=[40633], 00:10:24.232 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:24.232 | 99.99th=[41681] 00:10:24.232 bw ( KiB/s): min= 5344, max= 8192, per=14.05%, avg=6768.00, stdev=2013.84, samples=2 00:10:24.232 iops : min= 1336, max= 2048, avg=1692.00, stdev=503.46, samples=2 00:10:24.232 lat (msec) : 4=0.80%, 20=0.95%, 50=98.24% 00:10:24.232 cpu : usr=1.50%, sys=5.39%, ctx=106, majf=0, minf=9 00:10:24.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:10:24.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.232 issued rwts: total=1536,1819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.232 job3: (groupid=0, jobs=1): err= 0: pid=68727: Mon Jul 15 22:40:39 2024 00:10:24.233 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:24.233 slat (usec): min=5, max=7438, avg=130.36, stdev=559.05 00:10:24.233 clat (usec): min=13194, max=31586, avg=17436.03, stdev=2052.86 00:10:24.233 lat (usec): min=13219, max=31596, avg=17566.39, stdev=2098.99 00:10:24.233 clat percentiles (usec): 00:10:24.233 | 1.00th=[14091], 5.00th=[15401], 10.00th=[15795], 20.00th=[16188], 00:10:24.233 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17433], 00:10:24.233 | 70.00th=[17695], 80.00th=[18482], 90.00th=[19530], 95.00th=[20317], 00:10:24.233 | 99.00th=[26608], 99.50th=[27919], 99.90th=[29230], 
99.95th=[31589], 00:10:24.233 | 99.99th=[31589] 00:10:24.233 write: IOPS=3866, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1002msec); 0 zone resets 00:10:24.233 slat (usec): min=10, max=7568, avg=129.83, stdev=666.49 00:10:24.233 clat (usec): min=326, max=30925, avg=16529.73, stdev=2081.64 00:10:24.233 lat (usec): min=4134, max=30940, avg=16659.56, stdev=2167.38 00:10:24.233 clat percentiles (usec): 00:10:24.233 | 1.00th=[ 5211], 5.00th=[14484], 10.00th=[15008], 20.00th=[15664], 00:10:24.233 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16319], 60.00th=[16712], 00:10:24.233 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18482], 95.00th=[19792], 00:10:24.233 | 99.00th=[21103], 99.50th=[22676], 99.90th=[30802], 99.95th=[30802], 00:10:24.233 | 99.99th=[30802] 00:10:24.233 bw ( KiB/s): min=13584, max=16416, per=31.13%, avg=15000.00, stdev=2002.53, samples=2 00:10:24.233 iops : min= 3396, max= 4104, avg=3750.00, stdev=500.63, samples=2 00:10:24.233 lat (usec) : 500=0.01% 00:10:24.233 lat (msec) : 10=0.56%, 20=95.52%, 50=3.90% 00:10:24.233 cpu : usr=3.50%, sys=11.29%, ctx=265, majf=0, minf=13 00:10:24.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:24.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.233 issued rwts: total=3584,3874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.233 00:10:24.233 Run status group 0 (all jobs): 00:10:24.233 READ: bw=42.0MiB/s (44.0MB/s), 6120KiB/s-16.1MiB/s (6266kB/s-16.9MB/s), io=42.1MiB (44.2MB), run=1002-1004msec 00:10:24.233 WRITE: bw=47.1MiB/s (49.3MB/s), 7143KiB/s-18.0MiB/s (7315kB/s-18.8MB/s), io=47.2MiB (49.5MB), run=1002-1004msec 00:10:24.233 00:10:24.233 Disk stats (read/write): 00:10:24.233 nvme0n1: ios=3634/3968, merge=0/0, ticks=11875/12256, in_queue=24131, util=88.77% 00:10:24.233 nvme0n2: ios=1393/1536, merge=0/0, ticks=12422/13816, in_queue=26238, util=89.19% 00:10:24.233 nvme0n3: ios=1344/1536, merge=0/0, ticks=11489/12358, in_queue=23847, util=88.62% 00:10:24.233 nvme0n4: ios=3072/3494, merge=0/0, ticks=16757/16549, in_queue=33306, util=89.59% 00:10:24.233 22:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:24.233 [global] 00:10:24.233 thread=1 00:10:24.233 invalidate=1 00:10:24.233 rw=randwrite 00:10:24.233 time_based=1 00:10:24.233 runtime=1 00:10:24.233 ioengine=libaio 00:10:24.233 direct=1 00:10:24.233 bs=4096 00:10:24.233 iodepth=128 00:10:24.233 norandommap=0 00:10:24.233 numjobs=1 00:10:24.233 00:10:24.233 verify_dump=1 00:10:24.233 verify_backlog=512 00:10:24.233 verify_state_save=0 00:10:24.233 do_verify=1 00:10:24.233 verify=crc32c-intel 00:10:24.233 [job0] 00:10:24.233 filename=/dev/nvme0n1 00:10:24.233 [job1] 00:10:24.233 filename=/dev/nvme0n2 00:10:24.233 [job2] 00:10:24.233 filename=/dev/nvme0n3 00:10:24.233 [job3] 00:10:24.233 filename=/dev/nvme0n4 00:10:24.233 Could not set queue depth (nvme0n1) 00:10:24.233 Could not set queue depth (nvme0n2) 00:10:24.233 Could not set queue depth (nvme0n3) 00:10:24.233 Could not set queue depth (nvme0n4) 00:10:24.233 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.233 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.233 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.233 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:24.233 fio-3.35 00:10:24.233 Starting 4 threads 00:10:25.608 00:10:25.608 job0: (groupid=0, jobs=1): err= 0: pid=68780: Mon Jul 15 22:40:40 2024 00:10:25.608 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:25.608 slat (usec): min=4, max=3922, avg=90.70, stdev=352.54 00:10:25.608 clat (usec): min=8416, max=16566, avg=12014.13, stdev=1191.38 00:10:25.608 lat (usec): min=8432, max=16943, avg=12104.83, stdev=1228.83 00:10:25.608 clat percentiles (usec): 00:10:25.608 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10814], 20.00th=[10945], 00:10:25.608 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[12387], 00:10:25.608 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13435], 95.00th=[14091], 00:10:25.608 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15926], 99.95th=[16188], 00:10:25.608 | 99.99th=[16581] 00:10:25.608 write: IOPS=5553, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1003msec); 0 zone resets 00:10:25.608 slat (usec): min=10, max=3641, avg=88.28, stdev=372.80 00:10:25.608 clat (usec): min=2540, max=16519, avg=11710.84, stdev=1287.20 00:10:25.608 lat (usec): min=3032, max=16538, avg=11799.11, stdev=1331.63 00:10:25.608 clat percentiles (usec): 00:10:25.608 | 1.00th=[ 7439], 5.00th=[10028], 10.00th=[10421], 20.00th=[10945], 00:10:25.608 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:10:25.608 | 70.00th=[12125], 80.00th=[12387], 90.00th=[13042], 95.00th=[13566], 00:10:25.608 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16319], 99.95th=[16450], 00:10:25.608 | 99.99th=[16581] 00:10:25.608 bw ( KiB/s): min=20521, max=23064, per=28.98%, avg=21792.50, stdev=1798.17, samples=2 00:10:25.608 iops : min= 5130, max= 5766, avg=5448.00, stdev=449.72, samples=2 00:10:25.608 lat (msec) : 4=0.16%, 10=4.51%, 20=95.33% 00:10:25.609 cpu : usr=4.79%, sys=15.17%, ctx=527, majf=0, minf=9 00:10:25.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:25.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.609 issued rwts: total=5120,5570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.609 job1: (groupid=0, jobs=1): err= 0: pid=68781: Mon Jul 15 22:40:40 2024 00:10:25.609 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:25.609 slat (usec): min=4, max=9994, avg=129.99, stdev=582.09 00:10:25.609 clat (usec): min=10151, max=36861, avg=17063.34, stdev=7733.52 00:10:25.609 lat (usec): min=10189, max=36882, avg=17193.33, stdev=7783.79 00:10:25.609 clat percentiles (usec): 00:10:25.609 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:10:25.609 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:10:25.609 | 70.00th=[15008], 80.00th=[23725], 90.00th=[32637], 95.00th=[35914], 00:10:25.609 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:10:25.609 | 99.99th=[36963] 00:10:25.609 write: IOPS=4219, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec); 0 zone resets 00:10:25.609 slat (usec): min=10, max=5431, avg=102.61, stdev=480.93 00:10:25.609 clat (usec): min=693, max=25298, avg=13416.16, stdev=2837.83 00:10:25.609 lat (usec): min=2798, max=25324, avg=13518.76, stdev=2865.86 00:10:25.609 clat 
percentiles (usec): 00:10:25.609 | 1.00th=[ 8455], 5.00th=[10945], 10.00th=[11338], 20.00th=[11600], 00:10:25.609 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[13042], 00:10:25.609 | 70.00th=[13698], 80.00th=[15401], 90.00th=[17171], 95.00th=[18220], 00:10:25.609 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:10:25.609 | 99.99th=[25297] 00:10:25.609 bw ( KiB/s): min=12320, max=20480, per=21.81%, avg=16400.00, stdev=5769.99, samples=2 00:10:25.609 iops : min= 3080, max= 5120, avg=4100.00, stdev=1442.50, samples=2 00:10:25.609 lat (usec) : 750=0.01% 00:10:25.609 lat (msec) : 4=0.38%, 10=0.97%, 20=83.93%, 50=14.70% 00:10:25.609 cpu : usr=4.10%, sys=11.49%, ctx=380, majf=0, minf=7 00:10:25.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:25.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.609 issued rwts: total=4096,4228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.609 job2: (groupid=0, jobs=1): err= 0: pid=68782: Mon Jul 15 22:40:40 2024 00:10:25.609 read: IOPS=3829, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1003msec) 00:10:25.609 slat (usec): min=7, max=8330, avg=129.75, stdev=687.62 00:10:25.609 clat (usec): min=689, max=34951, avg=16977.79, stdev=5846.38 00:10:25.609 lat (usec): min=8108, max=34965, avg=17107.54, stdev=5847.00 00:10:25.609 clat percentiles (usec): 00:10:25.609 | 1.00th=[10683], 5.00th=[13304], 10.00th=[13435], 20.00th=[13829], 00:10:25.609 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:10:25.609 | 70.00th=[14746], 80.00th=[21365], 90.00th=[26870], 95.00th=[30802], 00:10:25.609 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:10:25.609 | 99.99th=[34866] 00:10:25.609 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:25.609 slat (usec): min=10, max=6048, avg=114.40, stdev=515.39 00:10:25.609 clat (usec): min=10592, max=21264, avg=14987.46, stdev=2133.16 00:10:25.609 lat (usec): min=10848, max=21292, avg=15101.86, stdev=2086.30 00:10:25.609 clat percentiles (usec): 00:10:25.609 | 1.00th=[11207], 5.00th=[13173], 10.00th=[13304], 20.00th=[13566], 00:10:25.609 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:10:25.609 | 70.00th=[14484], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:10:25.609 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:10:25.609 | 99.99th=[21365] 00:10:25.609 bw ( KiB/s): min=13304, max=19464, per=21.79%, avg=16384.00, stdev=4355.78, samples=2 00:10:25.609 iops : min= 3326, max= 4866, avg=4096.00, stdev=1088.94, samples=2 00:10:25.609 lat (usec) : 750=0.01% 00:10:25.609 lat (msec) : 10=0.40%, 20=86.91%, 50=12.67% 00:10:25.609 cpu : usr=3.89%, sys=12.18%, ctx=255, majf=0, minf=9 00:10:25.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:25.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.609 issued rwts: total=3841,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.609 job3: (groupid=0, jobs=1): err= 0: pid=68783: Mon Jul 15 22:40:40 2024 00:10:25.609 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:25.609 slat (usec): min=6, 
max=3186, avg=100.49, stdev=474.74 00:10:25.609 clat (usec): min=9682, max=15512, avg=13491.73, stdev=932.66 00:10:25.609 lat (usec): min=11806, max=15531, avg=13592.23, stdev=809.65 00:10:25.609 clat percentiles (usec): 00:10:25.609 | 1.00th=[10421], 5.00th=[12256], 10.00th=[12387], 20.00th=[12518], 00:10:25.609 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13829], 60.00th=[13960], 00:10:25.609 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14484], 95.00th=[14746], 00:10:25.609 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:10:25.609 | 99.99th=[15533] 00:10:25.609 write: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1002msec); 0 zone resets 00:10:25.609 slat (usec): min=11, max=3418, avg=100.27, stdev=427.26 00:10:25.609 clat (usec): min=223, max=15062, avg=12971.95, stdev=1386.37 00:10:25.609 lat (usec): min=2681, max=15513, avg=13072.22, stdev=1323.13 00:10:25.609 clat percentiles (usec): 00:10:25.609 | 1.00th=[ 5997], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:10:25.609 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[13566], 00:10:25.609 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14353], 00:10:25.609 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15008], 99.95th=[15008], 00:10:25.609 | 99.99th=[15008] 00:10:25.609 bw ( KiB/s): min=19240, max=19462, per=25.73%, avg=19351.00, stdev=156.98, samples=2 00:10:25.609 iops : min= 4810, max= 4865, avg=4837.50, stdev=38.89, samples=2 00:10:25.609 lat (usec) : 250=0.01% 00:10:25.609 lat (msec) : 4=0.33%, 10=1.13%, 20=98.53% 00:10:25.609 cpu : usr=5.29%, sys=13.29%, ctx=302, majf=0, minf=8 00:10:25.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:25.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.609 issued rwts: total=4608,4961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.609 00:10:25.609 Run status group 0 (all jobs): 00:10:25.609 READ: bw=68.8MiB/s (72.1MB/s), 15.0MiB/s-19.9MiB/s (15.7MB/s-20.9MB/s), io=69.0MiB (72.4MB), run=1002-1003msec 00:10:25.609 WRITE: bw=73.4MiB/s (77.0MB/s), 16.0MiB/s-21.7MiB/s (16.7MB/s-22.7MB/s), io=73.7MiB (77.2MB), run=1002-1003msec 00:10:25.609 00:10:25.609 Disk stats (read/write): 00:10:25.609 nvme0n1: ios=4419/4608, merge=0/0, ticks=16959/15446, in_queue=32405, util=87.54% 00:10:25.609 nvme0n2: ios=3599/4035, merge=0/0, ticks=16330/14094, in_queue=30424, util=87.14% 00:10:25.609 nvme0n3: ios=3488/3584, merge=0/0, ticks=12884/11190, in_queue=24074, util=89.19% 00:10:25.609 nvme0n4: ios=3968/4096, merge=0/0, ticks=12160/11703, in_queue=23863, util=89.75% 00:10:25.609 22:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:25.609 22:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68796 00:10:25.609 22:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:25.609 22:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:25.609 [global] 00:10:25.609 thread=1 00:10:25.609 invalidate=1 00:10:25.609 rw=read 00:10:25.609 time_based=1 00:10:25.609 runtime=10 00:10:25.609 ioengine=libaio 00:10:25.609 direct=1 00:10:25.609 bs=4096 00:10:25.609 iodepth=1 00:10:25.609 norandommap=1 00:10:25.609 numjobs=1 00:10:25.609 00:10:25.609 [job0] 00:10:25.609 filename=/dev/nvme0n1 00:10:25.609 [job1] 00:10:25.609 
filename=/dev/nvme0n2 00:10:25.609 [job2] 00:10:25.609 filename=/dev/nvme0n3 00:10:25.609 [job3] 00:10:25.609 filename=/dev/nvme0n4 00:10:25.609 Could not set queue depth (nvme0n1) 00:10:25.609 Could not set queue depth (nvme0n2) 00:10:25.609 Could not set queue depth (nvme0n3) 00:10:25.609 Could not set queue depth (nvme0n4) 00:10:25.609 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.609 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.609 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.609 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.609 fio-3.35 00:10:25.609 Starting 4 threads 00:10:28.890 22:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:28.890 fio: pid=68845, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:28.890 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=60522496, buflen=4096 00:10:28.890 22:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:28.890 fio: pid=68844, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:28.890 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=46534656, buflen=4096 00:10:28.890 22:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.890 22:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:29.148 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13316096, buflen=4096 00:10:29.148 fio: pid=68842, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:29.148 22:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.148 22:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:29.407 fio: pid=68843, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:29.407 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=61419520, buflen=4096 00:10:29.664 00:10:29.664 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68842: Mon Jul 15 22:40:45 2024 00:10:29.664 read: IOPS=5806, BW=22.7MiB/s (23.8MB/s)(76.7MiB/3382msec) 00:10:29.664 slat (usec): min=10, max=11380, avg=14.90, stdev=139.92 00:10:29.664 clat (usec): min=3, max=2088, avg=156.07, stdev=23.29 00:10:29.664 lat (usec): min=139, max=11557, avg=170.97, stdev=142.50 00:10:29.664 clat percentiles (usec): 00:10:29.664 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:10:29.664 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:10:29.664 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:10:29.664 | 99.00th=[ 190], 99.50th=[ 217], 99.90th=[ 375], 99.95th=[ 474], 00:10:29.664 | 99.99th=[ 1090] 00:10:29.664 bw ( KiB/s): min=22019, max=24008, per=36.02%, avg=23304.50, stdev=813.73, samples=6 00:10:29.664 iops : min= 5504, max= 6002, avg=5826.00, stdev=203.67, samples=6 00:10:29.664 lat (usec) : 4=0.01%, 250=99.62%, 500=0.33%, 
750=0.02% 00:10:29.664 lat (msec) : 2=0.01%, 4=0.01% 00:10:29.664 cpu : usr=1.36%, sys=7.04%, ctx=19648, majf=0, minf=1 00:10:29.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 issued rwts: total=19636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.664 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68843: Mon Jul 15 22:40:45 2024 00:10:29.664 read: IOPS=3991, BW=15.6MiB/s (16.3MB/s)(58.6MiB/3757msec) 00:10:29.664 slat (usec): min=7, max=12030, avg=18.00, stdev=182.63 00:10:29.664 clat (usec): min=127, max=3008, avg=231.05, stdev=63.93 00:10:29.664 lat (usec): min=141, max=12231, avg=249.05, stdev=193.80 00:10:29.664 clat percentiles (usec): 00:10:29.664 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 157], 00:10:29.664 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:10:29.664 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 302], 00:10:29.664 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 545], 99.95th=[ 742], 00:10:29.664 | 99.99th=[ 2057] 00:10:29.664 bw ( KiB/s): min=14392, max=20678, per=23.95%, avg=15497.14, stdev=2295.53, samples=7 00:10:29.664 iops : min= 3598, max= 5169, avg=3874.14, stdev=573.73, samples=7 00:10:29.664 lat (usec) : 250=61.12%, 500=38.74%, 750=0.09%, 1000=0.02% 00:10:29.664 lat (msec) : 2=0.01%, 4=0.01% 00:10:29.664 cpu : usr=1.52%, sys=5.14%, ctx=15007, majf=0, minf=1 00:10:29.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 issued rwts: total=14996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.664 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68844: Mon Jul 15 22:40:45 2024 00:10:29.664 read: IOPS=3611, BW=14.1MiB/s (14.8MB/s)(44.4MiB/3146msec) 00:10:29.664 slat (usec): min=8, max=7761, avg=13.43, stdev=99.78 00:10:29.664 clat (usec): min=142, max=7571, avg=262.19, stdev=124.98 00:10:29.664 lat (usec): min=156, max=8046, avg=275.62, stdev=160.02 00:10:29.664 clat percentiles (usec): 00:10:29.664 | 1.00th=[ 208], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:10:29.664 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:10:29.664 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:10:29.664 | 99.00th=[ 392], 99.50th=[ 412], 99.90th=[ 775], 99.95th=[ 2442], 00:10:29.664 | 99.99th=[ 6521] 00:10:29.664 bw ( KiB/s): min=13325, max=14968, per=22.28%, avg=14412.83, stdev=578.41, samples=6 00:10:29.664 iops : min= 3331, max= 3742, avg=3603.17, stdev=144.70, samples=6 00:10:29.664 lat (usec) : 250=43.74%, 500=56.05%, 750=0.10%, 1000=0.03% 00:10:29.664 lat (msec) : 2=0.02%, 4=0.04%, 10=0.03% 00:10:29.664 cpu : usr=0.99%, sys=4.10%, ctx=11365, majf=0, minf=1 00:10:29.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 issued rwts: 
total=11362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.664 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68845: Mon Jul 15 22:40:45 2024 00:10:29.664 read: IOPS=5067, BW=19.8MiB/s (20.8MB/s)(57.7MiB/2916msec) 00:10:29.664 slat (nsec): min=10917, max=67023, avg=14561.66, stdev=3883.17 00:10:29.664 clat (usec): min=144, max=1959, avg=181.55, stdev=37.14 00:10:29.664 lat (usec): min=157, max=1972, avg=196.11, stdev=37.61 00:10:29.664 clat percentiles (usec): 00:10:29.664 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:10:29.664 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:29.664 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 233], 00:10:29.664 | 99.00th=[ 273], 99.50th=[ 338], 99.90th=[ 478], 99.95th=[ 611], 00:10:29.664 | 99.99th=[ 1844] 00:10:29.664 bw ( KiB/s): min=19512, max=21632, per=31.80%, avg=20575.80, stdev=894.10, samples=5 00:10:29.664 iops : min= 4878, max= 5408, avg=5143.80, stdev=223.65, samples=5 00:10:29.664 lat (usec) : 250=97.56%, 500=2.35%, 750=0.06%, 1000=0.01% 00:10:29.664 lat (msec) : 2=0.02% 00:10:29.664 cpu : usr=1.37%, sys=6.38%, ctx=14778, majf=0, minf=1 00:10:29.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.664 issued rwts: total=14777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.664 00:10:29.664 Run status group 0 (all jobs): 00:10:29.665 READ: bw=63.2MiB/s (66.2MB/s), 14.1MiB/s-22.7MiB/s (14.8MB/s-23.8MB/s), io=237MiB (249MB), run=2916-3757msec 00:10:29.665 00:10:29.665 Disk stats (read/write): 00:10:29.665 nvme0n1: ios=19564/0, merge=0/0, ticks=3121/0, in_queue=3121, util=95.39% 00:10:29.665 nvme0n2: ios=14147/0, merge=0/0, ticks=3285/0, in_queue=3285, util=95.53% 00:10:29.665 nvme0n3: ios=11258/0, merge=0/0, ticks=2799/0, in_queue=2799, util=96.27% 00:10:29.665 nvme0n4: ios=14534/0, merge=0/0, ticks=2701/0, in_queue=2701, util=96.80% 00:10:29.665 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.665 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:29.922 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.922 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:30.180 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.180 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:30.438 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.438 22:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:30.696 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:30.696 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68796 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.954 nvmf hotplug test: fio failed as expected 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:30.954 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.212 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.490 rmmod nvme_tcp 00:10:31.490 rmmod nvme_fabrics 00:10:31.490 rmmod nvme_keyring 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68415 ']' 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68415 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68415 ']' 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- 
# kill -0 68415 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68415 00:10:31.490 killing process with pid 68415 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68415' 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68415 00:10:31.490 22:40:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68415 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.754 22:40:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:31.754 ************************************ 00:10:31.754 END TEST nvmf_fio_target 00:10:31.754 ************************************ 00:10:31.754 00:10:31.754 real 0m19.685s 00:10:31.755 user 1m14.457s 00:10:31.755 sys 0m10.114s 00:10:31.755 22:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.755 22:40:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.755 22:40:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:31.755 22:40:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:31.755 22:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:31.755 22:40:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.755 22:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.755 ************************************ 00:10:31.755 START TEST nvmf_bdevio 00:10:31.755 ************************************ 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:31.755 * Looking for test storage... 
00:10:31.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.755 22:40:47 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:31.755 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.756 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:31.756 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:31.756 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:31.756 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:31.756 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:31.756 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:32.014 Cannot find device "nvmf_tgt_br" 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.014 Cannot find device "nvmf_tgt_br2" 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:32.014 Cannot find device "nvmf_tgt_br" 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:32.014 Cannot find device "nvmf_tgt_br2" 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:32.014 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:32.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:10:32.273 00:10:32.273 --- 10.0.0.2 ping statistics --- 00:10:32.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.273 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:32.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:10:32.273 00:10:32.273 --- 10.0.0.3 ping statistics --- 00:10:32.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.273 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:32.273 00:10:32.273 --- 10.0.0.1 ping statistics --- 00:10:32.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.273 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69106 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69106 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69106 ']' 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.273 22:40:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.273 [2024-07-15 22:40:47.732206] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:10:32.273 [2024-07-15 22:40:47.732449] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.531 [2024-07-15 22:40:47.872096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.531 [2024-07-15 22:40:47.988661] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.531 [2024-07-15 22:40:47.989127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:32.531 [2024-07-15 22:40:47.989555] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.531 [2024-07-15 22:40:47.990067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.531 [2024-07-15 22:40:47.990222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.531 [2024-07-15 22:40:47.990613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:32.531 [2024-07-15 22:40:47.990701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:32.531 [2024-07-15 22:40:47.991160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:32.531 [2024-07-15 22:40:47.991163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.531 [2024-07-15 22:40:48.045386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 [2024-07-15 22:40:48.824715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 Malloc0 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.467 [2024-07-15 22:40:48.887433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:33.467 { 00:10:33.467 "params": { 00:10:33.467 "name": "Nvme$subsystem", 00:10:33.467 "trtype": "$TEST_TRANSPORT", 00:10:33.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.467 "adrfam": "ipv4", 00:10:33.467 "trsvcid": "$NVMF_PORT", 00:10:33.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.467 "hdgst": ${hdgst:-false}, 00:10:33.467 "ddgst": ${ddgst:-false} 00:10:33.467 }, 00:10:33.467 "method": "bdev_nvme_attach_controller" 00:10:33.467 } 00:10:33.467 EOF 00:10:33.467 )") 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:33.467 22:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:33.468 "params": { 00:10:33.468 "name": "Nvme1", 00:10:33.468 "trtype": "tcp", 00:10:33.468 "traddr": "10.0.0.2", 00:10:33.468 "adrfam": "ipv4", 00:10:33.468 "trsvcid": "4420", 00:10:33.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.468 "hdgst": false, 00:10:33.468 "ddgst": false 00:10:33.468 }, 00:10:33.468 "method": "bdev_nvme_attach_controller" 00:10:33.468 }' 00:10:33.468 [2024-07-15 22:40:48.946474] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:10:33.468 [2024-07-15 22:40:48.946578] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69142 ] 00:10:33.727 [2024-07-15 22:40:49.084537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.727 [2024-07-15 22:40:49.217364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.727 [2024-07-15 22:40:49.217494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.727 [2024-07-15 22:40:49.217495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.727 [2024-07-15 22:40:49.283991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:33.986 I/O targets: 00:10:33.986 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:33.986 00:10:33.986 00:10:33.986 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.986 http://cunit.sourceforge.net/ 00:10:33.986 00:10:33.986 00:10:33.986 Suite: bdevio tests on: Nvme1n1 00:10:33.986 Test: blockdev write read block ...passed 00:10:33.986 Test: blockdev write zeroes read block ...passed 00:10:33.986 Test: blockdev write zeroes read no split ...passed 00:10:33.986 Test: blockdev write zeroes read split ...passed 00:10:33.986 Test: blockdev write zeroes read split partial ...passed 00:10:33.986 Test: blockdev reset ...[2024-07-15 22:40:49.431151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:33.986 [2024-07-15 22:40:49.431256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa37430 (9): Bad file descriptor 00:10:33.986 [2024-07-15 22:40:49.448069] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:33.986 passed 00:10:33.986 Test: blockdev write read 8 blocks ...passed 00:10:33.986 Test: blockdev write read size > 128k ...passed 00:10:33.986 Test: blockdev write read invalid size ...passed 00:10:33.986 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.986 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.986 Test: blockdev write read max offset ...passed 00:10:33.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.986 Test: blockdev writev readv 8 blocks ...passed 00:10:33.986 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.986 Test: blockdev writev readv block ...passed 00:10:33.986 Test: blockdev writev readv size > 128k ...passed 00:10:33.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.986 Test: blockdev comparev and writev ...[2024-07-15 22:40:49.455598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.455738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.455845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.455937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.456388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.456496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.456622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.456810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.457266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.457377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.457470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.457576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.458010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.458106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.458194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.986 [2024-07-15 22:40:49.458280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:33.986 passed 00:10:33.986 Test: blockdev nvme passthru rw ...passed 00:10:33.986 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:40:49.459173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.986 [2024-07-15 22:40:49.459294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.459498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.986 [2024-07-15 22:40:49.459608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.459811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.986 [2024-07-15 22:40:49.459906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:33.986 [2024-07-15 22:40:49.460087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.986 [2024-07-15 22:40:49.460170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:33.986 passed 00:10:33.986 Test: blockdev nvme admin passthru ...passed 00:10:33.986 Test: blockdev copy ...passed 00:10:33.986 00:10:33.986 Run Summary: Type Total Ran Passed Failed Inactive 00:10:33.986 suites 1 1 n/a 0 0 00:10:33.986 tests 23 23 23 0 0 00:10:33.986 asserts 152 152 152 0 n/a 00:10:33.986 00:10:33.986 Elapsed time = 0.143 seconds 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.245 rmmod nvme_tcp 00:10:34.245 rmmod nvme_fabrics 00:10:34.245 rmmod nvme_keyring 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69106 ']' 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69106 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69106 ']' 00:10:34.245 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69106 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69106 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:34.246 killing process with pid 69106 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69106' 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69106 00:10:34.246 22:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69106 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.504 22:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.764 22:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:34.764 00:10:34.764 real 0m2.912s 00:10:34.764 user 0m9.581s 00:10:34.764 sys 0m0.801s 00:10:34.764 22:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.764 ************************************ 00:10:34.764 END TEST nvmf_bdevio 00:10:34.764 22:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.764 ************************************ 00:10:34.764 22:40:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:34.764 22:40:50 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:34.764 22:40:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:34.764 22:40:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.764 22:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.764 ************************************ 00:10:34.764 START TEST nvmf_auth_target 00:10:34.764 ************************************ 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:34.764 * Looking for test storage... 
00:10:34.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.764 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:34.765 Cannot find device "nvmf_tgt_br" 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.765 Cannot find device "nvmf_tgt_br2" 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:34.765 Cannot find device "nvmf_tgt_br" 00:10:34.765 
22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:34.765 Cannot find device "nvmf_tgt_br2" 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:34.765 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:35.024 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:35.024 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.024 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:35.024 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.024 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:35.024 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.025 22:40:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:35.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:35.025 00:10:35.025 --- 10.0.0.2 ping statistics --- 00:10:35.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.025 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:35.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:10:35.025 00:10:35.025 --- 10.0.0.3 ping statistics --- 00:10:35.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.025 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:35.025 00:10:35.025 --- 10.0.0.1 ping statistics --- 00:10:35.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.025 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.025 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69324 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69324 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69324 ']' 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.299 22:40:50 
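[editor's note] The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmftestinit first tears down any leftover topology from a previous run before nvmf_veth_init rebuilds it. Condensed from the commands traced above, the test network looks roughly like the sketch below (namespace, interface and address names are the ones used in this run; this is a sketch of the shape of the setup, not the full nvmf/common.sh logic).

#!/usr/bin/env bash
# Sketch of the veth/netns topology built by nvmf_veth_init, per the trace above.
set -e

NS=nvmf_tgt_ns_spdk            # namespace that will run nvmf_tgt
ip netns add "$NS"

# Initiator-side and target-side veth pairs; the *_br ends get enslaved to a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side interfaces into the namespace and address everything.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

# Bring the links up and tie the *_br ends together with a bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, allow forwarding across the bridge, then sanity-check.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1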
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.299 22:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69356 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea11bde15e872999da5474ecdf130dbaac19fa0a6c931a13 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oIO 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea11bde15e872999da5474ecdf130dbaac19fa0a6c931a13 0 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ea11bde15e872999da5474ecdf130dbaac19fa0a6c931a13 0 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.262 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ea11bde15e872999da5474ecdf130dbaac19fa0a6c931a13 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oIO 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oIO 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.oIO 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc236e8ec7320d6272b29743ff4b2c411b7027c6063f3a7cc4431aff689a7f90 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3BU 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc236e8ec7320d6272b29743ff4b2c411b7027c6063f3a7cc4431aff689a7f90 3 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc236e8ec7320d6272b29743ff4b2c411b7027c6063f3a7cc4431aff689a7f90 3 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc236e8ec7320d6272b29743ff4b2c411b7027c6063f3a7cc4431aff689a7f90 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3BU 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3BU 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.3BU 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6788274b016b94148a55724d68197cf4 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lbG 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6788274b016b94148a55724d68197cf4 1 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6788274b016b94148a55724d68197cf4 1 
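[editor's note] gen_dhchap_key, as traced above, boils down to: read len/2 bytes from /dev/urandom, hex-encode them with xxd, convert that hex string into the DH-HMAC-CHAP secret representation (the "DHHC-1:<hash id>:...:" strings that later appear in the nvme connect calls; nvmf/common.sh does the conversion with a short inline Python helper), and store the result in a private temp file. A minimal sketch of the surrounding shell, assuming nvmf/common.sh is sourced so its format_dhchap_key helper is available:

# Sketch of the gen_dhchap_key <digest> <len> flow traced above.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 file key
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)  # DHHC-1 hash ids

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex chars = len/2 random bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")          # e.g. /tmp/spdk.key-null.oIO
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # common.sh helper: DHHC-1:<id>:...:
    chmod 0600 "$file"                                # secrets stay private to the test
    echo "$file"                                      # callers store the path in keys[]/ckeys[]
}

# Shapes matching the first pair generated in this run (the remaining keys follow the same pattern):
#   keys[0]=$(gen_dhchap_key null 48)      # 24 random bytes, no hash
#   ckeys[0]=$(gen_dhchap_key sha512 64)   # 32 random bytes, sha512 controller key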
00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6788274b016b94148a55724d68197cf4 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:36.263 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lbG 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lbG 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.lbG 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9e46aab293b737d3ec775d5f056592a230928a2b88f2b11d 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ers 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9e46aab293b737d3ec775d5f056592a230928a2b88f2b11d 2 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9e46aab293b737d3ec775d5f056592a230928a2b88f2b11d 2 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9e46aab293b737d3ec775d5f056592a230928a2b88f2b11d 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ers 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ers 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Ers 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:36.523 
22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9d47ebb3ba5ff1161984fac253974aed20e57f36deb33ba0 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Eo8 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9d47ebb3ba5ff1161984fac253974aed20e57f36deb33ba0 2 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9d47ebb3ba5ff1161984fac253974aed20e57f36deb33ba0 2 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9d47ebb3ba5ff1161984fac253974aed20e57f36deb33ba0 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Eo8 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Eo8 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Eo8 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:36.523 22:40:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=627927ba05113a9f8ca026f67a62ecab 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0NN 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 627927ba05113a9f8ca026f67a62ecab 1 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 627927ba05113a9f8ca026f67a62ecab 1 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=627927ba05113a9f8ca026f67a62ecab 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0NN 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0NN 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.0NN 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5a7959678472192109d5915ef28c933eec22c5abf0efee3c3f6d9d262de0d662 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.E2X 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5a7959678472192109d5915ef28c933eec22c5abf0efee3c3f6d9d262de0d662 3 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5a7959678472192109d5915ef28c933eec22c5abf0efee3c3f6d9d262de0d662 3 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5a7959678472192109d5915ef28c933eec22c5abf0efee3c3f6d9d262de0d662 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:36.523 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.E2X 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.E2X 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.E2X 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69324 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69324 ']' 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
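[editor's note] At this point two SPDK application instances are running: nvmf_tgt (pid 69324) inside the nvmf_tgt_ns_spdk namespace, answering RPCs on the default /var/tmp/spdk.sock and acting as the NVMe-oF target, and a second spdk_tgt (pid 69356) on /var/tmp/host.sock whose bdev_nvme layer plays the host/initiator role. The hostrpc wrapper seen at target/auth.sh@31 simply points rpc.py at the latter socket. Roughly, with the paths and flags taken from the trace:

# The two application instances driven by this test.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: nvmf_tgt lives in the namespace, RPC on the default /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

# Host side: a plain spdk_tgt acting as the NVMe/TCP initiator, on a separate RPC socket
# so the two applications can be driven independently.
"$SPDK_BIN/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!

# target/auth.sh's hostrpc() is just rpc.py pointed at the host socket:
hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }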
00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.782 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69356 /var/tmp/host.sock 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69356 ']' 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.041 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oIO 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oIO 00:10:37.300 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oIO 00:10:37.559 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.3BU ]] 00:10:37.559 22:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3BU 00:10:37.559 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.559 22:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.559 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.559 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3BU 00:10:37.559 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.3BU 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lbG 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.lbG 00:10:37.817 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.lbG 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Ers ]] 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ers 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ers 00:10:38.074 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ers 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Eo8 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Eo8 00:10:38.331 22:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Eo8 00:10:38.599 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.0NN ]] 00:10:38.599 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0NN 00:10:38.599 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.600 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0NN 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0NN 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:38.889 
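[editor's note] Each generated key file is registered twice with keyring_file_add_key: once with the target application (rpc_cmd, i.e. rpc.py against /var/tmp/spdk.sock) and once with the host application via hostrpc, so both sides can refer to the same secret by keyring name (key0/ckey0, key1/ckey1, ...). The loop traced above is roughly:

# Registration loop sketched from the trace; rpc_cmd talks to the target app,
# hostrpc to the host app on /var/tmp/host.sock.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"       # target-side keyring entry
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"       # host-side keyring entry
    if [[ -n ${ckeys[i]} ]]; then                            # controller key for bidirectional auth, if any
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done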
22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.E2X 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.E2X 00:10:38.889 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.E2X 00:10:39.147 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:39.147 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:39.147 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.147 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.147 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:39.147 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.405 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.663 22:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.663 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.663 22:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.921 00:10:39.921 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.921 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.921 22:40:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.181 { 00:10:40.181 "cntlid": 1, 00:10:40.181 "qid": 0, 00:10:40.181 "state": "enabled", 00:10:40.181 "listen_address": { 00:10:40.181 "trtype": "TCP", 00:10:40.181 "adrfam": "IPv4", 00:10:40.181 "traddr": "10.0.0.2", 00:10:40.181 "trsvcid": "4420" 00:10:40.181 }, 00:10:40.181 "peer_address": { 00:10:40.181 "trtype": "TCP", 00:10:40.181 "adrfam": "IPv4", 00:10:40.181 "traddr": "10.0.0.1", 00:10:40.181 "trsvcid": "53540" 00:10:40.181 }, 00:10:40.181 "auth": { 00:10:40.181 "state": "completed", 00:10:40.181 "digest": "sha256", 00:10:40.181 "dhgroup": "null" 00:10:40.181 } 00:10:40.181 } 00:10:40.181 ]' 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.181 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.440 22:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.708 22:41:00 
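[editor's note] Each iteration of the digest/dhgroup/key loop then repeats the round trip just traced for sha256 / null / key0: restrict the host's bdev_nvme layer to the digest and DH group under test, allow the host NQN on the subsystem with that key (plus the controller key when one exists), attach a controller over 10.0.0.2:4420, confirm via nvmf_subsystem_get_qpairs that the qpair reached auth state "completed" with the expected digest and dhgroup, repeat the login with the kernel initiator's nvme connect using the raw DHHC-1 secrets, and tear it all down again. Condensed from that first iteration (hostid and NQNs as used in this run; the key files hold the DHHC-1 strings passed on the nvme connect command line above):

# One DH-HMAC-CHAP authentication round trip, condensed from the sha256/null/key0 pass.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0
HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side only offers the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: allow this host on the subsystem, bound to key0 (+ ckey0 for bidirectional auth).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# SPDK initiator: attach, then verify the qpair actually authenticated, then detach.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0

# Kernel initiator: same login with the DHHC-1 secrets, then disconnect and clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$(cat "${keys[0]}")" --dhchap-ctrl-secret "$(cat "${ckeys[0]}")"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"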
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.708 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.708 22:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.708 { 00:10:45.708 "cntlid": 3, 00:10:45.708 "qid": 0, 00:10:45.708 "state": "enabled", 00:10:45.708 "listen_address": { 00:10:45.708 "trtype": "TCP", 00:10:45.708 "adrfam": "IPv4", 00:10:45.708 "traddr": "10.0.0.2", 00:10:45.708 "trsvcid": "4420" 00:10:45.708 }, 00:10:45.708 "peer_address": { 00:10:45.708 "trtype": "TCP", 00:10:45.708 "adrfam": "IPv4", 00:10:45.708 "traddr": "10.0.0.1", 00:10:45.708 "trsvcid": "53558" 
00:10:45.708 }, 00:10:45.708 "auth": { 00:10:45.708 "state": "completed", 00:10:45.708 "digest": "sha256", 00:10:45.708 "dhgroup": "null" 00:10:45.708 } 00:10:45.708 } 00:10:45.708 ]' 00:10:45.708 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.974 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.245 22:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:47.180 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.439 22:41:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.439 22:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.697 00:10:47.697 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.697 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.697 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.954 { 00:10:47.954 "cntlid": 5, 00:10:47.954 "qid": 0, 00:10:47.954 "state": "enabled", 00:10:47.954 "listen_address": { 00:10:47.954 "trtype": "TCP", 00:10:47.954 "adrfam": "IPv4", 00:10:47.954 "traddr": "10.0.0.2", 00:10:47.954 "trsvcid": "4420" 00:10:47.954 }, 00:10:47.954 "peer_address": { 00:10:47.954 "trtype": "TCP", 00:10:47.954 "adrfam": "IPv4", 00:10:47.954 "traddr": "10.0.0.1", 00:10:47.954 "trsvcid": "53582" 00:10:47.954 }, 00:10:47.954 "auth": { 00:10:47.954 "state": "completed", 00:10:47.954 "digest": "sha256", 00:10:47.954 "dhgroup": "null" 00:10:47.954 } 00:10:47.954 } 00:10:47.954 ]' 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.954 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.212 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:48.212 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.212 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.212 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.212 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.470 22:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:49.036 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:49.298 22:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:49.570 00:10:49.570 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.570 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.570 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.829 { 00:10:49.829 "cntlid": 7, 00:10:49.829 "qid": 0, 00:10:49.829 "state": "enabled", 00:10:49.829 "listen_address": { 00:10:49.829 "trtype": "TCP", 00:10:49.829 "adrfam": "IPv4", 00:10:49.829 "traddr": "10.0.0.2", 00:10:49.829 "trsvcid": "4420" 00:10:49.829 }, 00:10:49.829 "peer_address": { 00:10:49.829 "trtype": "TCP", 00:10:49.829 "adrfam": "IPv4", 00:10:49.829 "traddr": "10.0.0.1", 00:10:49.829 "trsvcid": "43096" 00:10:49.829 }, 00:10:49.829 "auth": { 00:10:49.829 "state": "completed", 00:10:49.829 "digest": "sha256", 00:10:49.829 "dhgroup": "null" 00:10:49.829 } 00:10:49.829 } 00:10:49.829 ]' 00:10:49.829 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.088 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.346 22:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe2048 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.281 22:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.540 00:10:51.540 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.540 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.540 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.108 { 00:10:52.108 "cntlid": 9, 00:10:52.108 "qid": 0, 00:10:52.108 "state": "enabled", 00:10:52.108 "listen_address": { 00:10:52.108 "trtype": "TCP", 00:10:52.108 "adrfam": "IPv4", 00:10:52.108 "traddr": "10.0.0.2", 00:10:52.108 "trsvcid": "4420" 00:10:52.108 }, 00:10:52.108 "peer_address": { 00:10:52.108 "trtype": "TCP", 00:10:52.108 "adrfam": "IPv4", 00:10:52.108 "traddr": "10.0.0.1", 00:10:52.108 "trsvcid": "43104" 00:10:52.108 }, 00:10:52.108 "auth": { 00:10:52.108 "state": "completed", 
00:10:52.108 "digest": "sha256", 00:10:52.108 "dhgroup": "ffdhe2048" 00:10:52.108 } 00:10:52.108 } 00:10:52.108 ]' 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.108 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.367 22:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.304 22:41:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.304 22:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.872 00:10:53.872 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.872 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.872 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.132 { 00:10:54.132 "cntlid": 11, 00:10:54.132 "qid": 0, 00:10:54.132 "state": "enabled", 00:10:54.132 "listen_address": { 00:10:54.132 "trtype": "TCP", 00:10:54.132 "adrfam": "IPv4", 00:10:54.132 "traddr": "10.0.0.2", 00:10:54.132 "trsvcid": "4420" 00:10:54.132 }, 00:10:54.132 "peer_address": { 00:10:54.132 "trtype": "TCP", 00:10:54.132 "adrfam": "IPv4", 00:10:54.132 "traddr": "10.0.0.1", 00:10:54.132 "trsvcid": "43132" 00:10:54.132 }, 00:10:54.132 "auth": { 00:10:54.132 "state": "completed", 00:10:54.132 "digest": "sha256", 00:10:54.132 "dhgroup": "ffdhe2048" 00:10:54.132 } 00:10:54.132 } 00:10:54.132 ]' 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.132 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.391 22:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:10:54.959 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.959 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:54.959 22:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.959 22:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.217 22:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.217 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.217 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:55.217 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.476 22:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.735 00:10:55.735 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.735 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.735 22:41:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.994 { 00:10:55.994 "cntlid": 13, 00:10:55.994 "qid": 0, 00:10:55.994 "state": "enabled", 00:10:55.994 "listen_address": { 00:10:55.994 "trtype": "TCP", 00:10:55.994 "adrfam": "IPv4", 00:10:55.994 "traddr": "10.0.0.2", 00:10:55.994 "trsvcid": "4420" 00:10:55.994 }, 00:10:55.994 "peer_address": { 00:10:55.994 "trtype": "TCP", 00:10:55.994 "adrfam": "IPv4", 00:10:55.994 "traddr": "10.0.0.1", 00:10:55.994 "trsvcid": "43158" 00:10:55.994 }, 00:10:55.994 "auth": { 00:10:55.994 "state": "completed", 00:10:55.994 "digest": "sha256", 00:10:55.994 "dhgroup": "ffdhe2048" 00:10:55.994 } 00:10:55.994 } 00:10:55.994 ]' 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:55.994 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.253 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.253 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.253 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.512 22:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.146 22:41:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:57.146 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:57.405 22:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:57.665 00:10:57.665 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.665 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.665 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.922 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.922 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.922 22:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.922 22:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.180 { 00:10:58.180 "cntlid": 15, 00:10:58.180 "qid": 0, 00:10:58.180 "state": "enabled", 00:10:58.180 "listen_address": { 00:10:58.180 "trtype": "TCP", 00:10:58.180 "adrfam": "IPv4", 00:10:58.180 "traddr": "10.0.0.2", 00:10:58.180 "trsvcid": "4420" 00:10:58.180 }, 00:10:58.180 "peer_address": { 00:10:58.180 "trtype": "TCP", 00:10:58.180 "adrfam": "IPv4", 00:10:58.180 "traddr": "10.0.0.1", 00:10:58.180 "trsvcid": "43176" 00:10:58.180 }, 00:10:58.180 "auth": { 00:10:58.180 
"state": "completed", 00:10:58.180 "digest": "sha256", 00:10:58.180 "dhgroup": "ffdhe2048" 00:10:58.180 } 00:10:58.180 } 00:10:58.180 ]' 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.180 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.438 22:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.371 22:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.629 00:10:59.885 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.885 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.885 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.143 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.143 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.143 22:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.143 22:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.143 22:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.143 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.143 { 00:11:00.143 "cntlid": 17, 00:11:00.143 "qid": 0, 00:11:00.143 "state": "enabled", 00:11:00.143 "listen_address": { 00:11:00.143 "trtype": "TCP", 00:11:00.143 "adrfam": "IPv4", 00:11:00.143 "traddr": "10.0.0.2", 00:11:00.143 "trsvcid": "4420" 00:11:00.143 }, 00:11:00.143 "peer_address": { 00:11:00.143 "trtype": "TCP", 00:11:00.143 "adrfam": "IPv4", 00:11:00.143 "traddr": "10.0.0.1", 00:11:00.143 "trsvcid": "33030" 00:11:00.143 }, 00:11:00.143 "auth": { 00:11:00.143 "state": "completed", 00:11:00.143 "digest": "sha256", 00:11:00.144 "dhgroup": "ffdhe3072" 00:11:00.144 } 00:11:00.144 } 00:11:00.144 ]' 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.144 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.418 22:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.356 22:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.923 00:11:01.923 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.923 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:11:01.923 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.181 { 00:11:02.181 "cntlid": 19, 00:11:02.181 "qid": 0, 00:11:02.181 "state": "enabled", 00:11:02.181 "listen_address": { 00:11:02.181 "trtype": "TCP", 00:11:02.181 "adrfam": "IPv4", 00:11:02.181 "traddr": "10.0.0.2", 00:11:02.181 "trsvcid": "4420" 00:11:02.181 }, 00:11:02.181 "peer_address": { 00:11:02.181 "trtype": "TCP", 00:11:02.181 "adrfam": "IPv4", 00:11:02.181 "traddr": "10.0.0.1", 00:11:02.181 "trsvcid": "33046" 00:11:02.181 }, 00:11:02.181 "auth": { 00:11:02.181 "state": "completed", 00:11:02.181 "digest": "sha256", 00:11:02.181 "dhgroup": "ffdhe3072" 00:11:02.181 } 00:11:02.181 } 00:11:02.181 ]' 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.181 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.440 22:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for 
keyid in "${!keys[@]}" 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:03.376 22:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.635 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.894 00:11:03.894 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.894 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.894 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.152 { 00:11:04.152 "cntlid": 21, 00:11:04.152 "qid": 0, 00:11:04.152 "state": "enabled", 00:11:04.152 "listen_address": { 00:11:04.152 "trtype": "TCP", 00:11:04.152 "adrfam": "IPv4", 00:11:04.152 "traddr": "10.0.0.2", 00:11:04.152 "trsvcid": "4420" 00:11:04.152 }, 00:11:04.152 "peer_address": { 00:11:04.152 "trtype": "TCP", 00:11:04.152 "adrfam": "IPv4", 
00:11:04.152 "traddr": "10.0.0.1", 00:11:04.152 "trsvcid": "33072" 00:11:04.152 }, 00:11:04.152 "auth": { 00:11:04.152 "state": "completed", 00:11:04.152 "digest": "sha256", 00:11:04.152 "dhgroup": "ffdhe3072" 00:11:04.152 } 00:11:04.152 } 00:11:04.152 ]' 00:11:04.152 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.469 22:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.727 22:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.293 22:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.861 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.862 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:06.120 00:11:06.120 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.120 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.120 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.379 { 00:11:06.379 "cntlid": 23, 00:11:06.379 "qid": 0, 00:11:06.379 "state": "enabled", 00:11:06.379 "listen_address": { 00:11:06.379 "trtype": "TCP", 00:11:06.379 "adrfam": "IPv4", 00:11:06.379 "traddr": "10.0.0.2", 00:11:06.379 "trsvcid": "4420" 00:11:06.379 }, 00:11:06.379 "peer_address": { 00:11:06.379 "trtype": "TCP", 00:11:06.379 "adrfam": "IPv4", 00:11:06.379 "traddr": "10.0.0.1", 00:11:06.379 "trsvcid": "33104" 00:11:06.379 }, 00:11:06.379 "auth": { 00:11:06.379 "state": "completed", 00:11:06.379 "digest": "sha256", 00:11:06.379 "dhgroup": "ffdhe3072" 00:11:06.379 } 00:11:06.379 } 00:11:06.379 ]' 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.379 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.638 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.638 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.638 22:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.897 22:41:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:07.462 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.463 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:07.463 22:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.721 22:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.721 22:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.721 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:07.721 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.721 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:07.721 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.985 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.258 00:11:08.258 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:08.258 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.258 22:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.515 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.516 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.516 22:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.516 22:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.516 22:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.516 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.516 { 00:11:08.516 "cntlid": 25, 00:11:08.516 "qid": 0, 00:11:08.516 "state": "enabled", 00:11:08.516 "listen_address": { 00:11:08.516 "trtype": "TCP", 00:11:08.516 "adrfam": "IPv4", 00:11:08.516 "traddr": "10.0.0.2", 00:11:08.516 "trsvcid": "4420" 00:11:08.516 }, 00:11:08.516 "peer_address": { 00:11:08.516 "trtype": "TCP", 00:11:08.516 "adrfam": "IPv4", 00:11:08.516 "traddr": "10.0.0.1", 00:11:08.516 "trsvcid": "33124" 00:11:08.516 }, 00:11:08.516 "auth": { 00:11:08.516 "state": "completed", 00:11:08.516 "digest": "sha256", 00:11:08.516 "dhgroup": "ffdhe4096" 00:11:08.516 } 00:11:08.516 } 00:11:08.516 ]' 00:11:08.516 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.774 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.032 22:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:09.968 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.968 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:09.968 22:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.969 22:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.226 22:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.226 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.226 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.483 00:11:10.483 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.483 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.483 22:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.740 { 00:11:10.740 "cntlid": 27, 00:11:10.740 "qid": 0, 00:11:10.740 "state": "enabled", 00:11:10.740 "listen_address": { 00:11:10.740 "trtype": "TCP", 00:11:10.740 "adrfam": "IPv4", 00:11:10.740 "traddr": "10.0.0.2", 00:11:10.740 
"trsvcid": "4420" 00:11:10.740 }, 00:11:10.740 "peer_address": { 00:11:10.740 "trtype": "TCP", 00:11:10.740 "adrfam": "IPv4", 00:11:10.740 "traddr": "10.0.0.1", 00:11:10.740 "trsvcid": "50834" 00:11:10.740 }, 00:11:10.740 "auth": { 00:11:10.740 "state": "completed", 00:11:10.740 "digest": "sha256", 00:11:10.740 "dhgroup": "ffdhe4096" 00:11:10.740 } 00:11:10.740 } 00:11:10.740 ]' 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.740 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.998 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:10.998 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.998 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.998 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.998 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.255 22:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:11.820 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.078 22:41:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.078 22:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.644 00:11:12.644 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.644 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.644 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.902 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.902 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.902 22:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.902 22:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.902 22:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.902 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.902 { 00:11:12.902 "cntlid": 29, 00:11:12.902 "qid": 0, 00:11:12.902 "state": "enabled", 00:11:12.902 "listen_address": { 00:11:12.902 "trtype": "TCP", 00:11:12.902 "adrfam": "IPv4", 00:11:12.902 "traddr": "10.0.0.2", 00:11:12.902 "trsvcid": "4420" 00:11:12.903 }, 00:11:12.903 "peer_address": { 00:11:12.903 "trtype": "TCP", 00:11:12.903 "adrfam": "IPv4", 00:11:12.903 "traddr": "10.0.0.1", 00:11:12.903 "trsvcid": "50854" 00:11:12.903 }, 00:11:12.903 "auth": { 00:11:12.903 "state": "completed", 00:11:12.903 "digest": "sha256", 00:11:12.903 "dhgroup": "ffdhe4096" 00:11:12.903 } 00:11:12.903 } 00:11:12.903 ]' 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.903 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.903 22:41:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.468 22:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.032 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:14.289 22:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:14.548 00:11:14.548 22:41:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.548 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.548 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.114 { 00:11:15.114 "cntlid": 31, 00:11:15.114 "qid": 0, 00:11:15.114 "state": "enabled", 00:11:15.114 "listen_address": { 00:11:15.114 "trtype": "TCP", 00:11:15.114 "adrfam": "IPv4", 00:11:15.114 "traddr": "10.0.0.2", 00:11:15.114 "trsvcid": "4420" 00:11:15.114 }, 00:11:15.114 "peer_address": { 00:11:15.114 "trtype": "TCP", 00:11:15.114 "adrfam": "IPv4", 00:11:15.114 "traddr": "10.0.0.1", 00:11:15.114 "trsvcid": "50864" 00:11:15.114 }, 00:11:15.114 "auth": { 00:11:15.114 "state": "completed", 00:11:15.114 "digest": "sha256", 00:11:15.114 "dhgroup": "ffdhe4096" 00:11:15.114 } 00:11:15.114 } 00:11:15.114 ]' 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.114 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.371 22:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.964 22:41:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:15.964 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.221 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.222 22:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.787 00:11:16.787 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.787 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.787 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.045 { 00:11:17.045 "cntlid": 33, 00:11:17.045 "qid": 0, 00:11:17.045 "state": "enabled", 00:11:17.045 "listen_address": { 00:11:17.045 "trtype": "TCP", 00:11:17.045 "adrfam": "IPv4", 00:11:17.045 
"traddr": "10.0.0.2", 00:11:17.045 "trsvcid": "4420" 00:11:17.045 }, 00:11:17.045 "peer_address": { 00:11:17.045 "trtype": "TCP", 00:11:17.045 "adrfam": "IPv4", 00:11:17.045 "traddr": "10.0.0.1", 00:11:17.045 "trsvcid": "50890" 00:11:17.045 }, 00:11:17.045 "auth": { 00:11:17.045 "state": "completed", 00:11:17.045 "digest": "sha256", 00:11:17.045 "dhgroup": "ffdhe6144" 00:11:17.045 } 00:11:17.045 } 00:11:17.045 ]' 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.045 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.304 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:17.304 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.304 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.304 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.304 22:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.562 22:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:18.498 22:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.756 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.015 00:11:19.015 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.015 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.015 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.273 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.273 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.273 22:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.273 22:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.273 22:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.273 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.273 { 00:11:19.273 "cntlid": 35, 00:11:19.273 "qid": 0, 00:11:19.273 "state": "enabled", 00:11:19.273 "listen_address": { 00:11:19.273 "trtype": "TCP", 00:11:19.273 "adrfam": "IPv4", 00:11:19.273 "traddr": "10.0.0.2", 00:11:19.273 "trsvcid": "4420" 00:11:19.273 }, 00:11:19.273 "peer_address": { 00:11:19.273 "trtype": "TCP", 00:11:19.273 "adrfam": "IPv4", 00:11:19.273 "traddr": "10.0.0.1", 00:11:19.273 "trsvcid": "50916" 00:11:19.273 }, 00:11:19.273 "auth": { 00:11:19.273 "state": "completed", 00:11:19.273 "digest": "sha256", 00:11:19.273 "dhgroup": "ffdhe6144" 00:11:19.273 } 00:11:19.273 } 00:11:19.273 ]' 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:11:19.532 22:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.791 22:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.726 22:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.726 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.293 00:11:21.293 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.293 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.293 22:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.607 { 00:11:21.607 "cntlid": 37, 00:11:21.607 "qid": 0, 00:11:21.607 "state": "enabled", 00:11:21.607 "listen_address": { 00:11:21.607 "trtype": "TCP", 00:11:21.607 "adrfam": "IPv4", 00:11:21.607 "traddr": "10.0.0.2", 00:11:21.607 "trsvcid": "4420" 00:11:21.607 }, 00:11:21.607 "peer_address": { 00:11:21.607 "trtype": "TCP", 00:11:21.607 "adrfam": "IPv4", 00:11:21.607 "traddr": "10.0.0.1", 00:11:21.607 "trsvcid": "35152" 00:11:21.607 }, 00:11:21.607 "auth": { 00:11:21.607 "state": "completed", 00:11:21.607 "digest": "sha256", 00:11:21.607 "dhgroup": "ffdhe6144" 00:11:21.607 } 00:11:21.607 } 00:11:21.607 ]' 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.607 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.863 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.864 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.864 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.121 22:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.685 
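After each detach the same credentials are exercised once more through the kernel initiator, as in the nvme connect / nvme disconnect pair just above: the host and controller secrets are passed on the command line in DHHC-1 wire format, the connection is dropped again, and the host entry is removed so the next key can be installed on the subsystem. A sketch of that leg follows; <host-secret> and <ctrl-secret> are placeholders standing in for the full DHHC-1 strings printed in the trace for this key2 pass, and the target RPC socket is again assumed to be rpc.py's default.

#!/usr/bin/env bash
# Kernel-initiator leg of the same test (target/auth.sh@52..56 in the trace).
# <host-secret>/<ctrl-secret> are placeholders for the DHHC-1:xx:... strings
# shown in the log; the target RPC socket is assumed to be rpc.py's default.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostuuid=e2358641-73b4-4563-bfad-61d761fbd8b0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostuuid

# Connect through the kernel NVMe/TCP initiator, authenticating with DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostuuid" \
    --dhchap-secret 'DHHC-1:02:<host-secret>' \
    --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-secret>'

# Expected: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n "$subnqn"

# Remove the host so the next key pair can be added to the subsystem.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"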
22:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.685 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:22.943 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.508 00:11:23.508 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.508 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.508 22:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.765 { 00:11:23.765 "cntlid": 39, 00:11:23.765 "qid": 0, 00:11:23.765 "state": "enabled", 00:11:23.765 "listen_address": { 00:11:23.765 "trtype": "TCP", 00:11:23.765 "adrfam": 
"IPv4", 00:11:23.765 "traddr": "10.0.0.2", 00:11:23.765 "trsvcid": "4420" 00:11:23.765 }, 00:11:23.765 "peer_address": { 00:11:23.765 "trtype": "TCP", 00:11:23.765 "adrfam": "IPv4", 00:11:23.765 "traddr": "10.0.0.1", 00:11:23.765 "trsvcid": "35166" 00:11:23.765 }, 00:11:23.765 "auth": { 00:11:23.765 "state": "completed", 00:11:23.765 "digest": "sha256", 00:11:23.765 "dhgroup": "ffdhe6144" 00:11:23.765 } 00:11:23.765 } 00:11:23.765 ]' 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.765 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.330 22:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:24.896 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.154 22:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.089 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.089 { 00:11:26.089 "cntlid": 41, 00:11:26.089 "qid": 0, 00:11:26.089 "state": "enabled", 00:11:26.089 "listen_address": { 00:11:26.089 "trtype": "TCP", 00:11:26.089 "adrfam": "IPv4", 00:11:26.089 "traddr": "10.0.0.2", 00:11:26.089 "trsvcid": "4420" 00:11:26.089 }, 00:11:26.089 "peer_address": { 00:11:26.089 "trtype": "TCP", 00:11:26.089 "adrfam": "IPv4", 00:11:26.089 "traddr": "10.0.0.1", 00:11:26.089 "trsvcid": "35188" 00:11:26.089 }, 00:11:26.089 "auth": { 00:11:26.089 "state": "completed", 00:11:26.089 "digest": "sha256", 00:11:26.089 "dhgroup": "ffdhe8192" 00:11:26.089 } 00:11:26.089 } 00:11:26.089 ]' 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.089 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.348 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.348 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.348 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.348 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- 
# hostrpc bdev_nvme_detach_controller nvme0 00:11:26.348 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.608 22:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:27.176 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.435 22:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.369 00:11:28.369 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.369 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.369 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.625 { 00:11:28.625 "cntlid": 43, 00:11:28.625 "qid": 0, 00:11:28.625 "state": "enabled", 00:11:28.625 "listen_address": { 00:11:28.625 "trtype": "TCP", 00:11:28.625 "adrfam": "IPv4", 00:11:28.625 "traddr": "10.0.0.2", 00:11:28.625 "trsvcid": "4420" 00:11:28.625 }, 00:11:28.625 "peer_address": { 00:11:28.625 "trtype": "TCP", 00:11:28.625 "adrfam": "IPv4", 00:11:28.625 "traddr": "10.0.0.1", 00:11:28.625 "trsvcid": "35208" 00:11:28.625 }, 00:11:28.625 "auth": { 00:11:28.625 "state": "completed", 00:11:28.625 "digest": "sha256", 00:11:28.625 "dhgroup": "ffdhe8192" 00:11:28.625 } 00:11:28.625 } 00:11:28.625 ]' 00:11:28.625 22:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.626 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.882 22:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:29.812 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.069 22:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.633 00:11:30.633 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.633 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.633 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.198 { 00:11:31.198 "cntlid": 45, 
00:11:31.198 "qid": 0, 00:11:31.198 "state": "enabled", 00:11:31.198 "listen_address": { 00:11:31.198 "trtype": "TCP", 00:11:31.198 "adrfam": "IPv4", 00:11:31.198 "traddr": "10.0.0.2", 00:11:31.198 "trsvcid": "4420" 00:11:31.198 }, 00:11:31.198 "peer_address": { 00:11:31.198 "trtype": "TCP", 00:11:31.198 "adrfam": "IPv4", 00:11:31.198 "traddr": "10.0.0.1", 00:11:31.198 "trsvcid": "58110" 00:11:31.198 }, 00:11:31.198 "auth": { 00:11:31.198 "state": "completed", 00:11:31.198 "digest": "sha256", 00:11:31.198 "dhgroup": "ffdhe8192" 00:11:31.198 } 00:11:31.198 } 00:11:31.198 ]' 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.198 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.456 22:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:32.022 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.281 22:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.216 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.216 22:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.474 { 00:11:33.474 "cntlid": 47, 00:11:33.474 "qid": 0, 00:11:33.474 "state": "enabled", 00:11:33.474 "listen_address": { 00:11:33.474 "trtype": "TCP", 00:11:33.474 "adrfam": "IPv4", 00:11:33.474 "traddr": "10.0.0.2", 00:11:33.474 "trsvcid": "4420" 00:11:33.474 }, 00:11:33.474 "peer_address": { 00:11:33.474 "trtype": "TCP", 00:11:33.474 "adrfam": "IPv4", 00:11:33.474 "traddr": "10.0.0.1", 00:11:33.474 "trsvcid": "58134" 00:11:33.474 }, 00:11:33.474 "auth": { 00:11:33.474 "state": "completed", 00:11:33.474 "digest": "sha256", 00:11:33.474 "dhgroup": "ffdhe8192" 00:11:33.474 } 00:11:33.474 } 00:11:33.474 ]' 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.474 22:41:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.474 22:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.732 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:34.667 22:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.667 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.232 00:11:35.232 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.232 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.232 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.490 { 00:11:35.490 "cntlid": 49, 00:11:35.490 "qid": 0, 00:11:35.490 "state": "enabled", 00:11:35.490 "listen_address": { 00:11:35.490 "trtype": "TCP", 00:11:35.490 "adrfam": "IPv4", 00:11:35.490 "traddr": "10.0.0.2", 00:11:35.490 "trsvcid": "4420" 00:11:35.490 }, 00:11:35.490 "peer_address": { 00:11:35.490 "trtype": "TCP", 00:11:35.490 "adrfam": "IPv4", 00:11:35.490 "traddr": "10.0.0.1", 00:11:35.490 "trsvcid": "58148" 00:11:35.490 }, 00:11:35.490 "auth": { 00:11:35.490 "state": "completed", 00:11:35.490 "digest": "sha384", 00:11:35.490 "dhgroup": "null" 00:11:35.490 } 00:11:35.490 } 00:11:35.490 ]' 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:35.490 22:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.490 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.490 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.490 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.749 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:36.683 22:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.941 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.245 00:11:37.245 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.245 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.245 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.503 { 00:11:37.503 "cntlid": 51, 00:11:37.503 "qid": 0, 00:11:37.503 "state": "enabled", 00:11:37.503 "listen_address": { 00:11:37.503 "trtype": "TCP", 00:11:37.503 "adrfam": "IPv4", 00:11:37.503 "traddr": "10.0.0.2", 00:11:37.503 "trsvcid": "4420" 00:11:37.503 }, 00:11:37.503 "peer_address": { 00:11:37.503 "trtype": "TCP", 00:11:37.503 "adrfam": "IPv4", 00:11:37.503 "traddr": "10.0.0.1", 00:11:37.503 "trsvcid": "58188" 00:11:37.503 }, 00:11:37.503 "auth": { 00:11:37.503 "state": "completed", 00:11:37.503 "digest": "sha384", 00:11:37.503 "dhgroup": "null" 00:11:37.503 } 00:11:37.503 } 00:11:37.503 ]' 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.503 22:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.503 22:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:37.503 22:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.762 22:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.762 22:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.762 22:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.019 22:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:38.585 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:38.843 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:38.843 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.843 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
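The passes traced above all follow the same shape: the host-side bdev_nvme layer is restricted to a single digest/dhgroup pair, the target authorizes the host NQN with a DH-HMAC-CHAP key, and a controller is attached with the matching key so the handshake must succeed with exactly those parameters. A minimal sketch of one such pass is below, using the RPCs and flags visible in this log; it assumes /var/tmp/host.sock is the host-side RPC socket (as in the log), that $hostnqn holds the uuid-based host NQN used throughout this run, and that the key names key0/ckey0 were registered earlier in the test, outside this excerpt.

  # Host side: restrict DH-HMAC-CHAP negotiation to sha384 and the "null" DH group.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null
  # Target side: authorize the host NQN with this iteration's key pair
  # (the log issues this through the rpc_cmd helper against the target's default socket).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach a controller, authenticating with the same keys.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0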
00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.844 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.410 00:11:39.410 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.410 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.410 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.668 22:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.668 { 00:11:39.668 "cntlid": 53, 00:11:39.668 "qid": 0, 00:11:39.668 "state": "enabled", 00:11:39.668 "listen_address": { 00:11:39.668 "trtype": "TCP", 00:11:39.668 "adrfam": "IPv4", 00:11:39.668 "traddr": "10.0.0.2", 00:11:39.668 "trsvcid": "4420" 00:11:39.668 }, 00:11:39.668 "peer_address": { 00:11:39.668 "trtype": "TCP", 00:11:39.668 "adrfam": "IPv4", 00:11:39.668 "traddr": "10.0.0.1", 00:11:39.668 "trsvcid": "58218" 00:11:39.668 }, 00:11:39.668 "auth": { 00:11:39.668 "state": "completed", 00:11:39.668 "digest": "sha384", 00:11:39.668 "dhgroup": "null" 00:11:39.668 } 00:11:39.668 } 00:11:39.668 ]' 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.668 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.234 22:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:40.802 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.059 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.316 00:11:41.316 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.316 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.316 22:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.574 { 00:11:41.574 "cntlid": 55, 00:11:41.574 "qid": 0, 00:11:41.574 "state": "enabled", 00:11:41.574 "listen_address": { 00:11:41.574 "trtype": "TCP", 00:11:41.574 "adrfam": "IPv4", 00:11:41.574 "traddr": "10.0.0.2", 00:11:41.574 "trsvcid": "4420" 00:11:41.574 }, 00:11:41.574 "peer_address": { 00:11:41.574 "trtype": "TCP", 00:11:41.574 "adrfam": "IPv4", 00:11:41.574 "traddr": "10.0.0.1", 00:11:41.574 "trsvcid": "52248" 00:11:41.574 }, 00:11:41.574 "auth": { 00:11:41.574 "state": "completed", 00:11:41.574 "digest": "sha384", 00:11:41.574 "dhgroup": "null" 00:11:41.574 } 00:11:41.574 } 00:11:41.574 ]' 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.574 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.832 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:41.832 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.832 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.832 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.832 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.090 22:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:42.655 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.913 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.479 00:11:43.479 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.479 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.479 22:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.738 22:41:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.738 { 00:11:43.738 "cntlid": 57, 00:11:43.738 "qid": 0, 00:11:43.738 "state": "enabled", 00:11:43.738 "listen_address": { 00:11:43.738 "trtype": "TCP", 00:11:43.738 "adrfam": "IPv4", 00:11:43.738 "traddr": "10.0.0.2", 00:11:43.738 "trsvcid": "4420" 00:11:43.738 }, 00:11:43.738 "peer_address": { 00:11:43.738 "trtype": "TCP", 00:11:43.738 "adrfam": "IPv4", 00:11:43.738 "traddr": "10.0.0.1", 00:11:43.738 "trsvcid": "52254" 00:11:43.738 }, 00:11:43.738 "auth": { 00:11:43.738 "state": "completed", 00:11:43.738 "digest": "sha384", 00:11:43.738 "dhgroup": "ffdhe2048" 00:11:43.738 } 00:11:43.738 } 00:11:43.738 ]' 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.738 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.996 22:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:44.932 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.191 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.449 00:11:45.449 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.449 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.449 22:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.707 { 00:11:45.707 "cntlid": 59, 00:11:45.707 "qid": 0, 00:11:45.707 "state": "enabled", 00:11:45.707 "listen_address": { 00:11:45.707 "trtype": "TCP", 00:11:45.707 "adrfam": "IPv4", 00:11:45.707 "traddr": "10.0.0.2", 00:11:45.707 "trsvcid": "4420" 00:11:45.707 }, 00:11:45.707 "peer_address": { 00:11:45.707 "trtype": "TCP", 00:11:45.707 "adrfam": "IPv4", 00:11:45.707 "traddr": "10.0.0.1", 00:11:45.707 "trsvcid": "52292" 00:11:45.707 }, 00:11:45.707 "auth": { 00:11:45.707 "state": "completed", 00:11:45.707 "digest": "sha384", 00:11:45.707 "dhgroup": "ffdhe2048" 00:11:45.707 } 00:11:45.707 } 00:11:45.707 ]' 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.707 22:42:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.707 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.708 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.967 22:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:46.900 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.157 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.157 22:42:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.415 00:11:47.415 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.415 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.415 22:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.673 { 00:11:47.673 "cntlid": 61, 00:11:47.673 "qid": 0, 00:11:47.673 "state": "enabled", 00:11:47.673 "listen_address": { 00:11:47.673 "trtype": "TCP", 00:11:47.673 "adrfam": "IPv4", 00:11:47.673 "traddr": "10.0.0.2", 00:11:47.673 "trsvcid": "4420" 00:11:47.673 }, 00:11:47.673 "peer_address": { 00:11:47.673 "trtype": "TCP", 00:11:47.673 "adrfam": "IPv4", 00:11:47.673 "traddr": "10.0.0.1", 00:11:47.673 "trsvcid": "52314" 00:11:47.673 }, 00:11:47.673 "auth": { 00:11:47.673 "state": "completed", 00:11:47.673 "digest": "sha384", 00:11:47.673 "dhgroup": "ffdhe2048" 00:11:47.673 } 00:11:47.673 } 00:11:47.673 ]' 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.673 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.931 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.931 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.931 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.931 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.931 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.188 22:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.120 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.685 00:11:49.685 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.685 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.685 22:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.685 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.685 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.685 22:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.685 22:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.950 22:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
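After each attach the script does not rely on the controller merely coming up: it asks the target for the subsystem's queue pairs and checks the negotiated auth fields in the JSON dumps shown above and below. A sketch of that verification for the current sha384/ffdhe2048 pass, assuming the same subsystem NQN and a single qpair:

  # Fetch the qpairs for the subsystem and assert the negotiated auth parameters.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]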
00:11:49.950 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.950 { 00:11:49.950 "cntlid": 63, 00:11:49.950 "qid": 0, 00:11:49.950 "state": "enabled", 00:11:49.950 "listen_address": { 00:11:49.950 "trtype": "TCP", 00:11:49.950 "adrfam": "IPv4", 00:11:49.950 "traddr": "10.0.0.2", 00:11:49.950 "trsvcid": "4420" 00:11:49.950 }, 00:11:49.950 "peer_address": { 00:11:49.950 "trtype": "TCP", 00:11:49.950 "adrfam": "IPv4", 00:11:49.950 "traddr": "10.0.0.1", 00:11:49.950 "trsvcid": "39596" 00:11:49.950 }, 00:11:49.950 "auth": { 00:11:49.951 "state": "completed", 00:11:49.951 "digest": "sha384", 00:11:49.951 "dhgroup": "ffdhe2048" 00:11:49.951 } 00:11:49.951 } 00:11:49.951 ]' 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.951 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.236 22:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:50.837 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.095 22:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.354 22:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.354 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.354 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.612 00:11:51.612 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.612 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.612 22:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.870 { 00:11:51.870 "cntlid": 65, 00:11:51.870 "qid": 0, 00:11:51.870 "state": "enabled", 00:11:51.870 "listen_address": { 00:11:51.870 "trtype": "TCP", 00:11:51.870 "adrfam": "IPv4", 00:11:51.870 "traddr": "10.0.0.2", 00:11:51.870 "trsvcid": "4420" 00:11:51.870 }, 00:11:51.870 "peer_address": { 00:11:51.870 "trtype": "TCP", 00:11:51.870 "adrfam": "IPv4", 00:11:51.870 "traddr": "10.0.0.1", 00:11:51.870 "trsvcid": "39620" 00:11:51.870 }, 00:11:51.870 "auth": { 00:11:51.870 "state": "completed", 00:11:51.870 "digest": "sha384", 00:11:51.870 "dhgroup": "ffdhe3072" 00:11:51.870 } 00:11:51.870 } 00:11:51.870 ]' 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
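Each RPC-driven pass is then repeated through the kernel initiator: nvme-cli connects in-band with the raw DHHC-1 secrets rather than SPDK key names, and disconnects again, as in the nvme connect / nvme disconnect lines throughout this log. A condensed sketch follows, with the secret values elided because only the exact strings printed in the log are valid; the --dhchap-ctrl-secret flag appears only in the iterations that also use a controller key.

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0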
00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.870 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.437 22:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:53.004 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.263 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.522 00:11:53.522 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.522 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.522 22:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.780 { 00:11:53.780 "cntlid": 67, 00:11:53.780 "qid": 0, 00:11:53.780 "state": "enabled", 00:11:53.780 "listen_address": { 00:11:53.780 "trtype": "TCP", 00:11:53.780 "adrfam": "IPv4", 00:11:53.780 "traddr": "10.0.0.2", 00:11:53.780 "trsvcid": "4420" 00:11:53.780 }, 00:11:53.780 "peer_address": { 00:11:53.780 "trtype": "TCP", 00:11:53.780 "adrfam": "IPv4", 00:11:53.780 "traddr": "10.0.0.1", 00:11:53.780 "trsvcid": "39626" 00:11:53.780 }, 00:11:53.780 "auth": { 00:11:53.780 "state": "completed", 00:11:53.780 "digest": "sha384", 00:11:53.780 "dhgroup": "ffdhe3072" 00:11:53.780 } 00:11:53.780 } 00:11:53.780 ]' 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.780 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.038 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.038 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.038 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.038 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.038 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.298 22:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.864 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.167 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.425 00:11:55.425 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.425 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.425 22:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.683 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.683 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.683 22:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.683 22:42:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.942 { 00:11:55.942 "cntlid": 69, 00:11:55.942 "qid": 0, 00:11:55.942 "state": "enabled", 00:11:55.942 "listen_address": { 00:11:55.942 "trtype": "TCP", 00:11:55.942 "adrfam": "IPv4", 00:11:55.942 "traddr": "10.0.0.2", 00:11:55.942 "trsvcid": "4420" 00:11:55.942 }, 00:11:55.942 "peer_address": { 00:11:55.942 "trtype": "TCP", 00:11:55.942 "adrfam": "IPv4", 00:11:55.942 "traddr": "10.0.0.1", 00:11:55.942 "trsvcid": "39648" 00:11:55.942 }, 00:11:55.942 "auth": { 00:11:55.942 "state": "completed", 00:11:55.942 "digest": "sha384", 00:11:55.942 "dhgroup": "ffdhe3072" 00:11:55.942 } 00:11:55.942 } 00:11:55.942 ]' 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.942 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.201 22:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:11:56.767 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:56.768 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.028 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.595 00:11:57.595 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.595 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.595 22:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.595 { 00:11:57.595 "cntlid": 71, 00:11:57.595 "qid": 0, 00:11:57.595 "state": "enabled", 00:11:57.595 "listen_address": { 00:11:57.595 "trtype": "TCP", 00:11:57.595 "adrfam": "IPv4", 00:11:57.595 "traddr": "10.0.0.2", 00:11:57.595 "trsvcid": "4420" 00:11:57.595 }, 00:11:57.595 "peer_address": { 00:11:57.595 "trtype": "TCP", 00:11:57.595 "adrfam": "IPv4", 00:11:57.595 "traddr": "10.0.0.1", 00:11:57.595 "trsvcid": "39678" 00:11:57.595 }, 00:11:57.595 "auth": { 00:11:57.595 "state": "completed", 00:11:57.595 "digest": "sha384", 00:11:57.595 "dhgroup": "ffdhe3072" 00:11:57.595 } 00:11:57.595 } 00:11:57.595 ]' 00:11:57.595 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.856 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.116 22:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:58.682 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.941 22:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.199 22:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.199 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.199 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.492 00:11:59.492 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.492 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.492 22:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.750 { 00:11:59.750 "cntlid": 73, 00:11:59.750 "qid": 0, 00:11:59.750 "state": "enabled", 00:11:59.750 "listen_address": { 00:11:59.750 "trtype": "TCP", 00:11:59.750 "adrfam": "IPv4", 00:11:59.750 "traddr": "10.0.0.2", 00:11:59.750 "trsvcid": "4420" 00:11:59.750 }, 00:11:59.750 "peer_address": { 00:11:59.750 "trtype": "TCP", 00:11:59.750 "adrfam": "IPv4", 00:11:59.750 "traddr": "10.0.0.1", 00:11:59.750 "trsvcid": "39688" 00:11:59.750 }, 00:11:59.750 "auth": { 00:11:59.750 "state": "completed", 00:11:59.750 "digest": "sha384", 00:11:59.750 "dhgroup": "ffdhe4096" 00:11:59.750 } 00:11:59.750 } 00:11:59.750 ]' 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.750 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.317 22:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:00.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:00.884 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.143 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.402 00:12:01.661 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.661 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.661 22:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.919 { 00:12:01.919 "cntlid": 75, 00:12:01.919 "qid": 0, 00:12:01.919 "state": "enabled", 00:12:01.919 "listen_address": { 00:12:01.919 "trtype": "TCP", 00:12:01.919 "adrfam": "IPv4", 00:12:01.919 "traddr": "10.0.0.2", 00:12:01.919 "trsvcid": "4420" 00:12:01.919 }, 00:12:01.919 "peer_address": { 00:12:01.919 "trtype": "TCP", 00:12:01.919 "adrfam": "IPv4", 00:12:01.919 "traddr": "10.0.0.1", 00:12:01.919 "trsvcid": "40072" 00:12:01.919 }, 00:12:01.919 "auth": { 00:12:01.919 "state": "completed", 00:12:01.919 "digest": "sha384", 00:12:01.919 "dhgroup": "ffdhe4096" 00:12:01.919 } 00:12:01.919 } 00:12:01.919 ]' 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.919 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.178 22:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:03.116 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe4096 2 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.396 22:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.655 00:12:03.655 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.655 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.655 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.913 { 00:12:03.913 "cntlid": 77, 00:12:03.913 "qid": 0, 00:12:03.913 "state": "enabled", 00:12:03.913 "listen_address": { 00:12:03.913 "trtype": "TCP", 00:12:03.913 "adrfam": "IPv4", 00:12:03.913 "traddr": "10.0.0.2", 00:12:03.913 "trsvcid": "4420" 00:12:03.913 }, 00:12:03.913 "peer_address": { 00:12:03.913 "trtype": "TCP", 00:12:03.913 "adrfam": "IPv4", 00:12:03.913 "traddr": "10.0.0.1", 00:12:03.913 "trsvcid": "40090" 00:12:03.913 }, 00:12:03.913 "auth": { 00:12:03.913 "state": "completed", 00:12:03.913 "digest": "sha384", 00:12:03.913 "dhgroup": "ffdhe4096" 00:12:03.913 } 00:12:03.913 } 00:12:03.913 ]' 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
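Each pass of the loop above repeats the same host/target sequence: constrain the host's DH-CHAP parameters with bdev_nvme_set_options, register the host NQN on the subsystem together with a key pair, attach a controller using the matching keys, read the negotiated digest/dhgroup/state back from nvmf_subsystem_get_qpairs, and tear everything down again. A condensed sketch of one pass (sha384 + ffdhe4096 with key2), with the addresses and NQNs taken from the log; the target-side calls are assumed to go to the target's default RPC socket, and key2/ckey2 are keyring names set up earlier in the test, outside this excerpt.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0

  # host side: restrict negotiation to this pass's digest and DH group
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

  # target side (default RPC socket assumed): allow the host NQN with this key pair
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach and authenticate with the matching keys
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # verify what was negotiated on the resulting qpair
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

  # tear down before the next digest/dhgroup/key combination
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"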
00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.913 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.172 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.172 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.172 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.432 22:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:05.001 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.260 22:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.519 00:12:05.519 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.519 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.519 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.778 { 00:12:05.778 "cntlid": 79, 00:12:05.778 "qid": 0, 00:12:05.778 "state": "enabled", 00:12:05.778 "listen_address": { 00:12:05.778 "trtype": "TCP", 00:12:05.778 "adrfam": "IPv4", 00:12:05.778 "traddr": "10.0.0.2", 00:12:05.778 "trsvcid": "4420" 00:12:05.778 }, 00:12:05.778 "peer_address": { 00:12:05.778 "trtype": "TCP", 00:12:05.778 "adrfam": "IPv4", 00:12:05.778 "traddr": "10.0.0.1", 00:12:05.778 "trsvcid": "40118" 00:12:05.778 }, 00:12:05.778 "auth": { 00:12:05.778 "state": "completed", 00:12:05.778 "digest": "sha384", 00:12:05.778 "dhgroup": "ffdhe4096" 00:12:05.778 } 00:12:05.778 } 00:12:05.778 ]' 00:12:05.778 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.037 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.295 22:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.232 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:07.232 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.490 22:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.056 00:12:08.056 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.056 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.056 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.314 22:42:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.314 { 00:12:08.314 "cntlid": 81, 00:12:08.314 "qid": 0, 00:12:08.314 "state": "enabled", 00:12:08.314 "listen_address": { 00:12:08.314 "trtype": "TCP", 00:12:08.314 "adrfam": "IPv4", 00:12:08.314 "traddr": "10.0.0.2", 00:12:08.314 "trsvcid": "4420" 00:12:08.314 }, 00:12:08.314 "peer_address": { 00:12:08.314 "trtype": "TCP", 00:12:08.314 "adrfam": "IPv4", 00:12:08.314 "traddr": "10.0.0.1", 00:12:08.314 "trsvcid": "40152" 00:12:08.314 }, 00:12:08.314 "auth": { 00:12:08.314 "state": "completed", 00:12:08.314 "digest": "sha384", 00:12:08.314 "dhgroup": "ffdhe6144" 00:12:08.314 } 00:12:08.314 } 00:12:08.314 ]' 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.314 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.315 22:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.574 22:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:09.510 22:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:09.768 
22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.768 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.335 00:12:10.335 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.335 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.335 22:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.593 { 00:12:10.593 "cntlid": 83, 00:12:10.593 "qid": 0, 00:12:10.593 "state": "enabled", 00:12:10.593 "listen_address": { 00:12:10.593 "trtype": "TCP", 00:12:10.593 "adrfam": "IPv4", 00:12:10.593 "traddr": "10.0.0.2", 00:12:10.593 "trsvcid": "4420" 00:12:10.593 }, 00:12:10.593 "peer_address": { 00:12:10.593 "trtype": "TCP", 00:12:10.593 "adrfam": "IPv4", 00:12:10.593 "traddr": "10.0.0.1", 00:12:10.593 "trsvcid": "46452" 00:12:10.593 }, 00:12:10.593 "auth": { 00:12:10.593 "state": "completed", 00:12:10.593 "digest": "sha384", 00:12:10.593 "dhgroup": "ffdhe6144" 00:12:10.593 } 00:12:10.593 } 00:12:10.593 ]' 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.593 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.594 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.594 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.159 22:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.725 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.982 22:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.982 22:42:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.983 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.548 00:12:12.548 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.548 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.548 22:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.548 { 00:12:12.548 "cntlid": 85, 00:12:12.548 "qid": 0, 00:12:12.548 "state": "enabled", 00:12:12.548 "listen_address": { 00:12:12.548 "trtype": "TCP", 00:12:12.548 "adrfam": "IPv4", 00:12:12.548 "traddr": "10.0.0.2", 00:12:12.548 "trsvcid": "4420" 00:12:12.548 }, 00:12:12.548 "peer_address": { 00:12:12.548 "trtype": "TCP", 00:12:12.548 "adrfam": "IPv4", 00:12:12.548 "traddr": "10.0.0.1", 00:12:12.548 "trsvcid": "46492" 00:12:12.548 }, 00:12:12.548 "auth": { 00:12:12.548 "state": "completed", 00:12:12.548 "digest": "sha384", 00:12:12.548 "dhgroup": "ffdhe6144" 00:12:12.548 } 00:12:12.548 } 00:12:12.548 ]' 00:12:12.548 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.806 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.806 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.807 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.807 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.807 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.807 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.807 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.064 22:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret 
DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.628 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.886 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:13.886 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.886 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:13.886 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.887 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.453 00:12:14.453 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.453 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.453 22:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.711 { 00:12:14.711 "cntlid": 87, 00:12:14.711 "qid": 0, 00:12:14.711 "state": "enabled", 00:12:14.711 "listen_address": { 00:12:14.711 "trtype": "TCP", 00:12:14.711 "adrfam": "IPv4", 00:12:14.711 "traddr": "10.0.0.2", 00:12:14.711 "trsvcid": "4420" 00:12:14.711 }, 00:12:14.711 "peer_address": { 00:12:14.711 "trtype": "TCP", 00:12:14.711 "adrfam": "IPv4", 00:12:14.711 "traddr": "10.0.0.1", 00:12:14.711 "trsvcid": "46514" 00:12:14.711 }, 00:12:14.711 "auth": { 00:12:14.711 "state": "completed", 00:12:14.711 "digest": "sha384", 00:12:14.711 "dhgroup": "ffdhe6144" 00:12:14.711 } 00:12:14.711 } 00:12:14.711 ]' 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.711 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.969 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.969 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.970 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.970 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.970 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.286 22:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:15.867 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.125 22:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.690 00:12:16.690 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.690 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.690 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.948 { 00:12:16.948 "cntlid": 89, 00:12:16.948 "qid": 0, 00:12:16.948 "state": "enabled", 00:12:16.948 "listen_address": { 00:12:16.948 "trtype": "TCP", 00:12:16.948 "adrfam": "IPv4", 00:12:16.948 "traddr": "10.0.0.2", 00:12:16.948 "trsvcid": "4420" 00:12:16.948 }, 00:12:16.948 "peer_address": { 00:12:16.948 "trtype": "TCP", 00:12:16.948 "adrfam": "IPv4", 00:12:16.948 "traddr": "10.0.0.1", 00:12:16.948 "trsvcid": "46542" 00:12:16.948 }, 00:12:16.948 "auth": { 00:12:16.948 "state": "completed", 00:12:16.948 "digest": "sha384", 00:12:16.948 "dhgroup": "ffdhe8192" 00:12:16.948 } 00:12:16.948 } 00:12:16.948 ]' 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
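Each iteration also exercises the kernel initiator path: after the RPC-attached controller is detached, nvme-cli connects to the same subsystem, passing the DH-CHAP key material directly as DHHC-1 secrets rather than SPDK keyring names, and the "disconnected 1 controller(s)" lines confirm a clean teardown. A sketch of that check with the addresses from the log; the secret values below are placeholders (the real ones appear in full in the log).

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0

  # authenticate the kernel host with plaintext DHHC-1 secrets (placeholders shown)
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
      -q "$HOSTNQN" --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 \
      --dhchap-secret 'DHHC-1:00:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'

  # the connection should come up authenticated and then detach cleanly
  nvme disconnect -n "$SUBNQN"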
00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.948 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.206 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.206 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.206 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.206 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.206 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.464 22:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:18.397 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.655 22:42:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.655 22:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.220 00:12:19.220 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.220 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.220 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.550 { 00:12:19.550 "cntlid": 91, 00:12:19.550 "qid": 0, 00:12:19.550 "state": "enabled", 00:12:19.550 "listen_address": { 00:12:19.550 "trtype": "TCP", 00:12:19.550 "adrfam": "IPv4", 00:12:19.550 "traddr": "10.0.0.2", 00:12:19.550 "trsvcid": "4420" 00:12:19.550 }, 00:12:19.550 "peer_address": { 00:12:19.550 "trtype": "TCP", 00:12:19.550 "adrfam": "IPv4", 00:12:19.550 "traddr": "10.0.0.1", 00:12:19.550 "trsvcid": "46566" 00:12:19.550 }, 00:12:19.550 "auth": { 00:12:19.550 "state": "completed", 00:12:19.550 "digest": "sha384", 00:12:19.550 "dhgroup": "ffdhe8192" 00:12:19.550 } 00:12:19.550 } 00:12:19.550 ]' 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.550 22:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.550 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:19.550 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.550 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.550 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.550 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.808 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret 
DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:20.743 22:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:20.743 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.001 22:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.567 00:12:21.567 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.567 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.567 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.826 22:42:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.826 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.826 22:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.826 22:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 22:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.826 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.827 { 00:12:21.827 "cntlid": 93, 00:12:21.827 "qid": 0, 00:12:21.827 "state": "enabled", 00:12:21.827 "listen_address": { 00:12:21.827 "trtype": "TCP", 00:12:21.827 "adrfam": "IPv4", 00:12:21.827 "traddr": "10.0.0.2", 00:12:21.827 "trsvcid": "4420" 00:12:21.827 }, 00:12:21.827 "peer_address": { 00:12:21.827 "trtype": "TCP", 00:12:21.827 "adrfam": "IPv4", 00:12:21.827 "traddr": "10.0.0.1", 00:12:21.827 "trsvcid": "54926" 00:12:21.827 }, 00:12:21.827 "auth": { 00:12:21.827 "state": "completed", 00:12:21.827 "digest": "sha384", 00:12:21.827 "dhgroup": "ffdhe8192" 00:12:21.827 } 00:12:21.827 } 00:12:21.827 ]' 00:12:21.827 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.827 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.827 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.085 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:22.085 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.085 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.085 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.085 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.344 22:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.911 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:22.911 22:42:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.170 22:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.428 22:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.429 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.429 22:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.044 00:12:24.044 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.044 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.044 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.303 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.303 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.303 22:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.303 22:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.303 22:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.303 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.303 { 00:12:24.303 "cntlid": 95, 00:12:24.303 "qid": 0, 00:12:24.303 "state": "enabled", 00:12:24.303 "listen_address": { 00:12:24.303 "trtype": "TCP", 00:12:24.303 "adrfam": "IPv4", 00:12:24.303 "traddr": "10.0.0.2", 00:12:24.303 "trsvcid": "4420" 00:12:24.303 }, 00:12:24.303 "peer_address": { 00:12:24.303 "trtype": "TCP", 00:12:24.303 "adrfam": "IPv4", 00:12:24.303 "traddr": "10.0.0.1", 00:12:24.303 "trsvcid": "54954" 00:12:24.303 }, 00:12:24.303 "auth": { 00:12:24.303 "state": "completed", 00:12:24.303 "digest": "sha384", 00:12:24.303 "dhgroup": "ffdhe8192" 00:12:24.303 } 00:12:24.303 } 00:12:24.303 ]' 00:12:24.303 22:42:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.304 22:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.869 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:25.436 22:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.694 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.953 00:12:25.953 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.953 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.953 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.212 { 00:12:26.212 "cntlid": 97, 00:12:26.212 "qid": 0, 00:12:26.212 "state": "enabled", 00:12:26.212 "listen_address": { 00:12:26.212 "trtype": "TCP", 00:12:26.212 "adrfam": "IPv4", 00:12:26.212 "traddr": "10.0.0.2", 00:12:26.212 "trsvcid": "4420" 00:12:26.212 }, 00:12:26.212 "peer_address": { 00:12:26.212 "trtype": "TCP", 00:12:26.212 "adrfam": "IPv4", 00:12:26.212 "traddr": "10.0.0.1", 00:12:26.212 "trsvcid": "54982" 00:12:26.212 }, 00:12:26.212 "auth": { 00:12:26.212 "state": "completed", 00:12:26.212 "digest": "sha512", 00:12:26.212 "dhgroup": "null" 00:12:26.212 } 00:12:26.212 } 00:12:26.212 ]' 00:12:26.212 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.470 22:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.728 22:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:27.675 22:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.675 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.240 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.240 { 00:12:28.240 "cntlid": 99, 00:12:28.240 "qid": 0, 00:12:28.240 "state": "enabled", 00:12:28.240 "listen_address": { 00:12:28.240 "trtype": "TCP", 00:12:28.240 "adrfam": "IPv4", 00:12:28.240 "traddr": "10.0.0.2", 00:12:28.240 "trsvcid": "4420" 00:12:28.240 }, 00:12:28.240 "peer_address": { 00:12:28.240 "trtype": "TCP", 00:12:28.240 "adrfam": "IPv4", 00:12:28.240 "traddr": "10.0.0.1", 00:12:28.240 "trsvcid": "55008" 00:12:28.240 }, 00:12:28.240 "auth": { 00:12:28.240 "state": "completed", 00:12:28.240 "digest": "sha512", 00:12:28.240 "dhgroup": "null" 00:12:28.240 } 00:12:28.240 } 00:12:28.240 ]' 00:12:28.240 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.498 22:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.755 22:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:29.320 22:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.320 22:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:29.320 22:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.321 22:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.321 22:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.321 22:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.321 22:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.321 22:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.578 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.143 00:12:30.143 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.143 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.143 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.401 { 00:12:30.401 "cntlid": 101, 00:12:30.401 "qid": 0, 00:12:30.401 "state": "enabled", 00:12:30.401 "listen_address": { 00:12:30.401 "trtype": "TCP", 00:12:30.401 "adrfam": "IPv4", 00:12:30.401 "traddr": "10.0.0.2", 00:12:30.401 "trsvcid": "4420" 00:12:30.401 }, 00:12:30.401 "peer_address": { 00:12:30.401 "trtype": "TCP", 00:12:30.401 "adrfam": "IPv4", 00:12:30.401 "traddr": "10.0.0.1", 00:12:30.401 "trsvcid": "45276" 00:12:30.401 }, 00:12:30.401 "auth": { 00:12:30.401 
"state": "completed", 00:12:30.401 "digest": "sha512", 00:12:30.401 "dhgroup": "null" 00:12:30.401 } 00:12:30.401 } 00:12:30.401 ]' 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.401 22:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.659 22:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:31.594 22:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.594 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.852 00:12:32.111 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.111 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.111 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.370 { 00:12:32.370 "cntlid": 103, 00:12:32.370 "qid": 0, 00:12:32.370 "state": "enabled", 00:12:32.370 "listen_address": { 00:12:32.370 "trtype": "TCP", 00:12:32.370 "adrfam": "IPv4", 00:12:32.370 "traddr": "10.0.0.2", 00:12:32.370 "trsvcid": "4420" 00:12:32.370 }, 00:12:32.370 "peer_address": { 00:12:32.370 "trtype": "TCP", 00:12:32.370 "adrfam": "IPv4", 00:12:32.370 "traddr": "10.0.0.1", 00:12:32.370 "trsvcid": "45310" 00:12:32.370 }, 00:12:32.370 "auth": { 00:12:32.370 "state": "completed", 00:12:32.370 "digest": "sha512", 00:12:32.370 "dhgroup": "null" 00:12:32.370 } 00:12:32.370 } 00:12:32.370 ]' 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.370 22:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.629 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid 
e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:33.567 22:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.567 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.133 00:12:34.133 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.133 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.133 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.391 { 00:12:34.391 "cntlid": 105, 00:12:34.391 "qid": 0, 00:12:34.391 "state": "enabled", 00:12:34.391 "listen_address": { 00:12:34.391 "trtype": "TCP", 00:12:34.391 "adrfam": "IPv4", 00:12:34.391 "traddr": "10.0.0.2", 00:12:34.391 "trsvcid": "4420" 00:12:34.391 }, 00:12:34.391 "peer_address": { 00:12:34.391 "trtype": "TCP", 00:12:34.391 "adrfam": "IPv4", 00:12:34.391 "traddr": "10.0.0.1", 00:12:34.391 "trsvcid": "45332" 00:12:34.391 }, 00:12:34.391 "auth": { 00:12:34.391 "state": "completed", 00:12:34.391 "digest": "sha512", 00:12:34.391 "dhgroup": "ffdhe2048" 00:12:34.391 } 00:12:34.391 } 00:12:34.391 ]' 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.391 22:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.976 22:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.544 22:42:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.544 22:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.803 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.062 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.321 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.321 { 00:12:36.321 "cntlid": 107, 00:12:36.321 "qid": 0, 00:12:36.321 "state": "enabled", 00:12:36.321 "listen_address": { 00:12:36.321 "trtype": "TCP", 00:12:36.321 "adrfam": "IPv4", 00:12:36.321 "traddr": "10.0.0.2", 00:12:36.321 "trsvcid": "4420" 00:12:36.321 }, 00:12:36.321 "peer_address": { 00:12:36.321 "trtype": "TCP", 00:12:36.321 "adrfam": "IPv4", 00:12:36.321 "traddr": "10.0.0.1", 
00:12:36.321 "trsvcid": "45352" 00:12:36.321 }, 00:12:36.321 "auth": { 00:12:36.321 "state": "completed", 00:12:36.321 "digest": "sha512", 00:12:36.321 "dhgroup": "ffdhe2048" 00:12:36.321 } 00:12:36.321 } 00:12:36.321 ]' 00:12:36.580 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.580 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.580 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.580 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.580 22:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.580 22:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.580 22:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.580 22:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.839 22:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:37.775 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:38.034 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:38.034 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.034 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.034 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.035 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.293 00:12:38.293 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.293 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.293 22:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.551 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.551 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.551 22:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.551 22:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.551 22:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.551 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.551 { 00:12:38.551 "cntlid": 109, 00:12:38.551 "qid": 0, 00:12:38.551 "state": "enabled", 00:12:38.551 "listen_address": { 00:12:38.552 "trtype": "TCP", 00:12:38.552 "adrfam": "IPv4", 00:12:38.552 "traddr": "10.0.0.2", 00:12:38.552 "trsvcid": "4420" 00:12:38.552 }, 00:12:38.552 "peer_address": { 00:12:38.552 "trtype": "TCP", 00:12:38.552 "adrfam": "IPv4", 00:12:38.552 "traddr": "10.0.0.1", 00:12:38.552 "trsvcid": "45380" 00:12:38.552 }, 00:12:38.552 "auth": { 00:12:38.552 "state": "completed", 00:12:38.552 "digest": "sha512", 00:12:38.552 "dhgroup": "ffdhe2048" 00:12:38.552 } 00:12:38.552 } 00:12:38.552 ]' 00:12:38.552 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.552 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.552 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.810 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.810 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.810 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.810 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.810 22:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.069 22:42:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.636 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.896 22:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 22:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.155 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.155 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.414 00:12:40.414 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.414 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.414 22:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.673 { 00:12:40.673 "cntlid": 111, 00:12:40.673 "qid": 0, 00:12:40.673 "state": "enabled", 00:12:40.673 "listen_address": { 00:12:40.673 "trtype": "TCP", 00:12:40.673 "adrfam": "IPv4", 00:12:40.673 "traddr": "10.0.0.2", 00:12:40.673 "trsvcid": "4420" 00:12:40.673 }, 00:12:40.673 "peer_address": { 00:12:40.673 "trtype": "TCP", 00:12:40.673 "adrfam": "IPv4", 00:12:40.673 "traddr": "10.0.0.1", 00:12:40.673 "trsvcid": "49366" 00:12:40.673 }, 00:12:40.673 "auth": { 00:12:40.673 "state": "completed", 00:12:40.673 "digest": "sha512", 00:12:40.673 "dhgroup": "ffdhe2048" 00:12:40.673 } 00:12:40.673 } 00:12:40.673 ]' 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.673 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.931 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.931 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.931 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.190 22:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:41.757 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.016 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.274 00:12:42.274 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.274 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.274 22:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.533 { 00:12:42.533 "cntlid": 113, 00:12:42.533 "qid": 0, 00:12:42.533 "state": "enabled", 00:12:42.533 "listen_address": { 00:12:42.533 "trtype": "TCP", 00:12:42.533 "adrfam": "IPv4", 00:12:42.533 "traddr": "10.0.0.2", 00:12:42.533 "trsvcid": "4420" 00:12:42.533 }, 00:12:42.533 "peer_address": { 00:12:42.533 "trtype": "TCP", 00:12:42.533 "adrfam": "IPv4", 
00:12:42.533 "traddr": "10.0.0.1", 00:12:42.533 "trsvcid": "49380" 00:12:42.533 }, 00:12:42.533 "auth": { 00:12:42.533 "state": "completed", 00:12:42.533 "digest": "sha512", 00:12:42.533 "dhgroup": "ffdhe3072" 00:12:42.533 } 00:12:42.533 } 00:12:42.533 ]' 00:12:42.533 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.791 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.050 22:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.617 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.889 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.468 00:12:44.468 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.468 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.468 22:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.726 { 00:12:44.726 "cntlid": 115, 00:12:44.726 "qid": 0, 00:12:44.726 "state": "enabled", 00:12:44.726 "listen_address": { 00:12:44.726 "trtype": "TCP", 00:12:44.726 "adrfam": "IPv4", 00:12:44.726 "traddr": "10.0.0.2", 00:12:44.726 "trsvcid": "4420" 00:12:44.726 }, 00:12:44.726 "peer_address": { 00:12:44.726 "trtype": "TCP", 00:12:44.726 "adrfam": "IPv4", 00:12:44.726 "traddr": "10.0.0.1", 00:12:44.726 "trsvcid": "49416" 00:12:44.726 }, 00:12:44.726 "auth": { 00:12:44.726 "state": "completed", 00:12:44.726 "digest": "sha512", 00:12:44.726 "dhgroup": "ffdhe3072" 00:12:44.726 } 00:12:44.726 } 00:12:44.726 ]' 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.726 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.985 22:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:45.551 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.551 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:45.551 22:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.551 22:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.810 22:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.810 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.810 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:45.810 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.068 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.325 00:12:46.325 22:43:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.325 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.325 22:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.584 { 00:12:46.584 "cntlid": 117, 00:12:46.584 "qid": 0, 00:12:46.584 "state": "enabled", 00:12:46.584 "listen_address": { 00:12:46.584 "trtype": "TCP", 00:12:46.584 "adrfam": "IPv4", 00:12:46.584 "traddr": "10.0.0.2", 00:12:46.584 "trsvcid": "4420" 00:12:46.584 }, 00:12:46.584 "peer_address": { 00:12:46.584 "trtype": "TCP", 00:12:46.584 "adrfam": "IPv4", 00:12:46.584 "traddr": "10.0.0.1", 00:12:46.584 "trsvcid": "49448" 00:12:46.584 }, 00:12:46.584 "auth": { 00:12:46.584 "state": "completed", 00:12:46.584 "digest": "sha512", 00:12:46.584 "dhgroup": "ffdhe3072" 00:12:46.584 } 00:12:46.584 } 00:12:46.584 ]' 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:46.584 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.842 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.842 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.843 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.101 22:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.666 22:43:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.666 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.924 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.487 00:12:48.487 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.487 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.487 22:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.745 { 00:12:48.745 "cntlid": 119, 00:12:48.745 "qid": 0, 00:12:48.745 "state": "enabled", 00:12:48.745 "listen_address": { 00:12:48.745 "trtype": "TCP", 00:12:48.745 "adrfam": "IPv4", 00:12:48.745 "traddr": "10.0.0.2", 00:12:48.745 "trsvcid": "4420" 00:12:48.745 }, 00:12:48.745 
"peer_address": { 00:12:48.745 "trtype": "TCP", 00:12:48.745 "adrfam": "IPv4", 00:12:48.745 "traddr": "10.0.0.1", 00:12:48.745 "trsvcid": "49490" 00:12:48.745 }, 00:12:48.745 "auth": { 00:12:48.745 "state": "completed", 00:12:48.745 "digest": "sha512", 00:12:48.745 "dhgroup": "ffdhe3072" 00:12:48.745 } 00:12:48.745 } 00:12:48.745 ]' 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.745 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.003 22:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.936 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.503 00:12:50.503 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.503 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.503 22:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.760 { 00:12:50.760 "cntlid": 121, 00:12:50.760 "qid": 0, 00:12:50.760 "state": "enabled", 00:12:50.760 "listen_address": { 00:12:50.760 "trtype": "TCP", 00:12:50.760 "adrfam": "IPv4", 00:12:50.760 "traddr": "10.0.0.2", 00:12:50.760 "trsvcid": "4420" 00:12:50.760 }, 00:12:50.760 "peer_address": { 00:12:50.760 "trtype": "TCP", 00:12:50.760 "adrfam": "IPv4", 00:12:50.760 "traddr": "10.0.0.1", 00:12:50.760 "trsvcid": "52974" 00:12:50.760 }, 00:12:50.760 "auth": { 00:12:50.760 "state": "completed", 00:12:50.760 "digest": "sha512", 00:12:50.760 "dhgroup": "ffdhe4096" 00:12:50.760 } 00:12:50.760 } 00:12:50.760 ]' 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:50.760 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.017 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.017 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.017 22:43:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.274 22:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.838 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.097 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:12:52.664 00:12:52.664 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.664 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.664 22:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.922 { 00:12:52.922 "cntlid": 123, 00:12:52.922 "qid": 0, 00:12:52.922 "state": "enabled", 00:12:52.922 "listen_address": { 00:12:52.922 "trtype": "TCP", 00:12:52.922 "adrfam": "IPv4", 00:12:52.922 "traddr": "10.0.0.2", 00:12:52.922 "trsvcid": "4420" 00:12:52.922 }, 00:12:52.922 "peer_address": { 00:12:52.922 "trtype": "TCP", 00:12:52.922 "adrfam": "IPv4", 00:12:52.922 "traddr": "10.0.0.1", 00:12:52.922 "trsvcid": "53002" 00:12:52.922 }, 00:12:52.922 "auth": { 00:12:52.922 "state": "completed", 00:12:52.922 "digest": "sha512", 00:12:52.922 "dhgroup": "ffdhe4096" 00:12:52.922 } 00:12:52.922 } 00:12:52.922 ]' 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.922 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.181 22:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:12:53.749 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:54.009 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:54.272 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.273 22:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.531 00:12:54.531 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.531 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.531 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.790 { 00:12:54.790 "cntlid": 125, 00:12:54.790 "qid": 0, 00:12:54.790 "state": "enabled", 00:12:54.790 "listen_address": { 00:12:54.790 
"trtype": "TCP", 00:12:54.790 "adrfam": "IPv4", 00:12:54.790 "traddr": "10.0.0.2", 00:12:54.790 "trsvcid": "4420" 00:12:54.790 }, 00:12:54.790 "peer_address": { 00:12:54.790 "trtype": "TCP", 00:12:54.790 "adrfam": "IPv4", 00:12:54.790 "traddr": "10.0.0.1", 00:12:54.790 "trsvcid": "53036" 00:12:54.790 }, 00:12:54.790 "auth": { 00:12:54.790 "state": "completed", 00:12:54.790 "digest": "sha512", 00:12:54.790 "dhgroup": "ffdhe4096" 00:12:54.790 } 00:12:54.790 } 00:12:54.790 ]' 00:12:54.790 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.048 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.306 22:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:12:55.872 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.872 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:55.872 22:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.872 22:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.131 22:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.131 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.131 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:56.131 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.390 22:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.648 00:12:56.648 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.648 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.648 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.907 { 00:12:56.907 "cntlid": 127, 00:12:56.907 "qid": 0, 00:12:56.907 "state": "enabled", 00:12:56.907 "listen_address": { 00:12:56.907 "trtype": "TCP", 00:12:56.907 "adrfam": "IPv4", 00:12:56.907 "traddr": "10.0.0.2", 00:12:56.907 "trsvcid": "4420" 00:12:56.907 }, 00:12:56.907 "peer_address": { 00:12:56.907 "trtype": "TCP", 00:12:56.907 "adrfam": "IPv4", 00:12:56.907 "traddr": "10.0.0.1", 00:12:56.907 "trsvcid": "53050" 00:12:56.907 }, 00:12:56.907 "auth": { 00:12:56.907 "state": "completed", 00:12:56.907 "digest": "sha512", 00:12:56.907 "dhgroup": "ffdhe4096" 00:12:56.907 } 00:12:56.907 } 00:12:56.907 ]' 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.907 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.166 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.166 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.166 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.166 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.166 22:43:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.427 22:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.363 22:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.928 00:12:58.928 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.928 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.928 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.186 { 00:12:59.186 "cntlid": 129, 00:12:59.186 "qid": 0, 00:12:59.186 "state": "enabled", 00:12:59.186 "listen_address": { 00:12:59.186 "trtype": "TCP", 00:12:59.186 "adrfam": "IPv4", 00:12:59.186 "traddr": "10.0.0.2", 00:12:59.186 "trsvcid": "4420" 00:12:59.186 }, 00:12:59.186 "peer_address": { 00:12:59.186 "trtype": "TCP", 00:12:59.186 "adrfam": "IPv4", 00:12:59.186 "traddr": "10.0.0.1", 00:12:59.186 "trsvcid": "53080" 00:12:59.186 }, 00:12:59.186 "auth": { 00:12:59.186 "state": "completed", 00:12:59.186 "digest": "sha512", 00:12:59.186 "dhgroup": "ffdhe6144" 00:12:59.186 } 00:12:59.186 } 00:12:59.186 ]' 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.186 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.446 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:59.446 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.446 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.446 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.446 22:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.705 22:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:00.273 22:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.842 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.411 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.411 { 00:13:01.411 "cntlid": 131, 
00:13:01.411 "qid": 0, 00:13:01.411 "state": "enabled", 00:13:01.411 "listen_address": { 00:13:01.411 "trtype": "TCP", 00:13:01.411 "adrfam": "IPv4", 00:13:01.411 "traddr": "10.0.0.2", 00:13:01.411 "trsvcid": "4420" 00:13:01.411 }, 00:13:01.411 "peer_address": { 00:13:01.411 "trtype": "TCP", 00:13:01.411 "adrfam": "IPv4", 00:13:01.411 "traddr": "10.0.0.1", 00:13:01.411 "trsvcid": "53674" 00:13:01.411 }, 00:13:01.411 "auth": { 00:13:01.411 "state": "completed", 00:13:01.411 "digest": "sha512", 00:13:01.411 "dhgroup": "ffdhe6144" 00:13:01.411 } 00:13:01.411 } 00:13:01.411 ]' 00:13:01.411 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.670 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.670 22:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.670 22:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:01.670 22:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.670 22:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.670 22:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.670 22:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.929 22:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:02.865 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.124 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.382 00:13:03.382 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.382 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.382 22:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.640 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.640 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.640 22:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.640 22:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.900 { 00:13:03.900 "cntlid": 133, 00:13:03.900 "qid": 0, 00:13:03.900 "state": "enabled", 00:13:03.900 "listen_address": { 00:13:03.900 "trtype": "TCP", 00:13:03.900 "adrfam": "IPv4", 00:13:03.900 "traddr": "10.0.0.2", 00:13:03.900 "trsvcid": "4420" 00:13:03.900 }, 00:13:03.900 "peer_address": { 00:13:03.900 "trtype": "TCP", 00:13:03.900 "adrfam": "IPv4", 00:13:03.900 "traddr": "10.0.0.1", 00:13:03.900 "trsvcid": "53704" 00:13:03.900 }, 00:13:03.900 "auth": { 00:13:03.900 "state": "completed", 00:13:03.900 "digest": "sha512", 00:13:03.900 "dhgroup": "ffdhe6144" 00:13:03.900 } 00:13:03.900 } 00:13:03.900 ]' 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.900 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.159 22:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.095 22:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.662 00:13:05.662 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.662 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.662 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.921 { 00:13:05.921 "cntlid": 135, 00:13:05.921 "qid": 0, 00:13:05.921 "state": "enabled", 00:13:05.921 "listen_address": { 00:13:05.921 "trtype": "TCP", 00:13:05.921 "adrfam": "IPv4", 00:13:05.921 "traddr": "10.0.0.2", 00:13:05.921 "trsvcid": "4420" 00:13:05.921 }, 00:13:05.921 "peer_address": { 00:13:05.921 "trtype": "TCP", 00:13:05.921 "adrfam": "IPv4", 00:13:05.921 "traddr": "10.0.0.1", 00:13:05.921 "trsvcid": "53734" 00:13:05.921 }, 00:13:05.921 "auth": { 00:13:05.921 "state": "completed", 00:13:05.921 "digest": "sha512", 00:13:05.921 "dhgroup": "ffdhe6144" 00:13:05.921 } 00:13:05.921 } 00:13:05.921 ]' 00:13:05.921 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.180 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.439 22:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.006 22:43:22 
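Each connect_authenticate round captured in this log reduces to the same three RPCs. The sketch below is hand-runnable and only restates commands visible above; rpc_cmd is assumed here to hit the target's default /var/tmp/spdk.sock, and key0-key3/ckey0-ckey3 name DH-HMAC-CHAP keys registered earlier in the test, outside this excerpt.

# 1) Host side: pin the initiator to the digest and DH group under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# 2) Target side: allow this host NQN to authenticate with the key under test
#    (the key3 round above omits --dhchap-ctrlr-key, so it skips bidirectional auth)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3) Host side: attach a controller; DH-HMAC-CHAP runs as part of the connect
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1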
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:07.006 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.273 22:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.274 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.274 22:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.211 00:13:08.211 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.211 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.212 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.212 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.212 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.212 22:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.212 22:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.469 { 
00:13:08.469 "cntlid": 137, 00:13:08.469 "qid": 0, 00:13:08.469 "state": "enabled", 00:13:08.469 "listen_address": { 00:13:08.469 "trtype": "TCP", 00:13:08.469 "adrfam": "IPv4", 00:13:08.469 "traddr": "10.0.0.2", 00:13:08.469 "trsvcid": "4420" 00:13:08.469 }, 00:13:08.469 "peer_address": { 00:13:08.469 "trtype": "TCP", 00:13:08.469 "adrfam": "IPv4", 00:13:08.469 "traddr": "10.0.0.1", 00:13:08.469 "trsvcid": "53754" 00:13:08.469 }, 00:13:08.469 "auth": { 00:13:08.469 "state": "completed", 00:13:08.469 "digest": "sha512", 00:13:08.469 "dhgroup": "ffdhe8192" 00:13:08.469 } 00:13:08.469 } 00:13:08.469 ]' 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.469 22:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.727 22:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:09.661 22:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.919 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.491 00:13:10.491 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.491 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.491 22:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.749 { 00:13:10.749 "cntlid": 139, 00:13:10.749 "qid": 0, 00:13:10.749 "state": "enabled", 00:13:10.749 "listen_address": { 00:13:10.749 "trtype": "TCP", 00:13:10.749 "adrfam": "IPv4", 00:13:10.749 "traddr": "10.0.0.2", 00:13:10.749 "trsvcid": "4420" 00:13:10.749 }, 00:13:10.749 "peer_address": { 00:13:10.749 "trtype": "TCP", 00:13:10.749 "adrfam": "IPv4", 00:13:10.749 "traddr": "10.0.0.1", 00:13:10.749 "trsvcid": "43756" 00:13:10.749 }, 00:13:10.749 "auth": { 00:13:10.749 "state": "completed", 00:13:10.749 "digest": "sha512", 00:13:10.749 "dhgroup": "ffdhe8192" 00:13:10.749 } 00:13:10.749 } 00:13:10.749 ]' 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.749 
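The digest/dhgroup/state comparisons above come straight from the target's qpair listing. A stand-alone version of that verification, using the same RPC and jq filters as the log (with scripts/rpc.py standing in for the test's rpc_cmd wrapper), is:

qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The round only passes if the qpair negotiated the expected parameters
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]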
22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.749 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.008 22:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:01:Njc4ODI3NGIwMTZiOTQxNDhhNTU3MjRkNjgxOTdjZjTVFioc: --dhchap-ctrl-secret DHHC-1:02:OWU0NmFhYjI5M2I3MzdkM2VjNzc1ZDVmMDU2NTkyYTIzMDkyOGEyYjg4ZjJiMTFk56RMFg==: 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.944 22:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.510 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.768 22:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.026 { 00:13:13.026 "cntlid": 141, 00:13:13.026 "qid": 0, 00:13:13.026 "state": "enabled", 00:13:13.026 "listen_address": { 00:13:13.026 "trtype": "TCP", 00:13:13.026 "adrfam": "IPv4", 00:13:13.026 "traddr": "10.0.0.2", 00:13:13.026 "trsvcid": "4420" 00:13:13.026 }, 00:13:13.026 "peer_address": { 00:13:13.026 "trtype": "TCP", 00:13:13.026 "adrfam": "IPv4", 00:13:13.026 "traddr": "10.0.0.1", 00:13:13.026 "trsvcid": "43792" 00:13:13.026 }, 00:13:13.026 "auth": { 00:13:13.026 "state": "completed", 00:13:13.026 "digest": "sha512", 00:13:13.026 "dhgroup": "ffdhe8192" 00:13:13.026 } 00:13:13.026 } 00:13:13.026 ]' 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.026 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.284 22:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:02:OWQ0N2ViYjNiYTVmZjExNjE5ODRmYWMyNTM5NzRhZWQyMGU1N2YzNmRlYjMzYmEwZBMvdg==: --dhchap-ctrl-secret DHHC-1:01:NjI3OTI3YmEwNTExM2E5ZjhjYTAyNmY2N2E2MmVjYWJCa8VC: 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:13.852 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.452 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:14.452 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.452 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:14.452 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:14.452 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.453 22:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.020 00:13:15.020 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.020 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.020 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:15.279 { 00:13:15.279 "cntlid": 143, 00:13:15.279 "qid": 0, 00:13:15.279 "state": "enabled", 00:13:15.279 "listen_address": { 00:13:15.279 "trtype": "TCP", 00:13:15.279 "adrfam": "IPv4", 00:13:15.279 "traddr": "10.0.0.2", 00:13:15.279 "trsvcid": "4420" 00:13:15.279 }, 00:13:15.279 "peer_address": { 00:13:15.279 "trtype": "TCP", 00:13:15.279 "adrfam": "IPv4", 00:13:15.279 "traddr": "10.0.0.1", 00:13:15.279 "trsvcid": "43808" 00:13:15.279 }, 00:13:15.279 "auth": { 00:13:15.279 "state": "completed", 00:13:15.279 "digest": "sha512", 00:13:15.279 "dhgroup": "ffdhe8192" 00:13:15.279 } 00:13:15.279 } 00:13:15.279 ]' 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.279 22:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.606 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:16.185 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.443 22:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.374 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.374 { 00:13:17.374 "cntlid": 145, 00:13:17.374 "qid": 0, 00:13:17.374 "state": "enabled", 00:13:17.374 "listen_address": { 00:13:17.374 "trtype": "TCP", 00:13:17.374 "adrfam": "IPv4", 00:13:17.374 "traddr": "10.0.0.2", 00:13:17.374 "trsvcid": "4420" 00:13:17.374 }, 00:13:17.374 "peer_address": { 00:13:17.374 "trtype": "TCP", 00:13:17.374 "adrfam": "IPv4", 00:13:17.374 "traddr": "10.0.0.1", 00:13:17.374 "trsvcid": "43822" 00:13:17.374 }, 00:13:17.374 "auth": { 00:13:17.374 "state": "completed", 00:13:17.374 "digest": "sha512", 00:13:17.374 "dhgroup": "ffdhe8192" 00:13:17.374 } 00:13:17.374 } 00:13:17.374 ]' 00:13:17.374 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.633 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:13:17.633 22:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.633 22:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.633 22:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.633 22:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.633 22:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.633 22:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.890 22:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:00:ZWExMWJkZTE1ZTg3Mjk5OWRhNTQ3NGVjZGYxMzBkYmFhYzE5ZmEwYTZjOTMxYTEzGRCnZw==: --dhchap-ctrl-secret DHHC-1:03:YmMyMzZlOGVjNzMyMGQ2MjcyYjI5NzQzZmY0YjJjNDExYjcwMjdjNjA2M2YzYTdjYzQ0MzFhZmY2ODlhN2Y5MKyg7B4=: 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.821 22:43:34 
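For completeness, the nvme connect / nvme disconnect pairs interleaved through this log exercise the same DH-HMAC-CHAP negotiation from the kernel initiator. One cycle looks like the following; the DHHC-1 secrets are shown as placeholders, and the full values appear in the log lines above.

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 \
    --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 \
    --dhchap-secret 'DHHC-1:00:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0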
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:18.821 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:19.386 request: 00:13:19.386 { 00:13:19.386 "name": "nvme0", 00:13:19.386 "trtype": "tcp", 00:13:19.386 "traddr": "10.0.0.2", 00:13:19.386 "adrfam": "ipv4", 00:13:19.386 "trsvcid": "4420", 00:13:19.386 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:19.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0", 00:13:19.386 "prchk_reftag": false, 00:13:19.386 "prchk_guard": false, 00:13:19.386 "hdgst": false, 00:13:19.386 "ddgst": false, 00:13:19.386 "dhchap_key": "key2", 00:13:19.386 "method": "bdev_nvme_attach_controller", 00:13:19.386 "req_id": 1 00:13:19.386 } 00:13:19.386 Got JSON-RPC error response 00:13:19.386 response: 00:13:19.386 { 00:13:19.386 "code": -5, 00:13:19.386 "message": "Input/output error" 00:13:19.386 } 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.386 22:43:34 
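The request/response pair above is the intended failure case: only key1 is installed for this host, so attaching with key2 (and, in the next request, with key1 but the wrong controller key ckey2) has to be rejected, and the host RPC surfaces the failed authentication as JSON-RPC error -5 (Input/output error). Stripped of the test's NOT/valid_exec_arg plumbing, the assertion is simply that the attach exits non-zero:

# Attach with a key the subsystem does not hold for this host; success would be a test failure
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
    echo "attach with mismatched key unexpectedly succeeded" >&2
    exit 1
fi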
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.386 22:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.952 request: 00:13:19.952 { 00:13:19.952 "name": "nvme0", 00:13:19.952 "trtype": "tcp", 00:13:19.952 "traddr": "10.0.0.2", 00:13:19.952 "adrfam": "ipv4", 00:13:19.952 "trsvcid": "4420", 00:13:19.952 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:19.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0", 00:13:19.952 "prchk_reftag": false, 00:13:19.952 "prchk_guard": false, 00:13:19.952 "hdgst": false, 00:13:19.952 "ddgst": false, 00:13:19.952 "dhchap_key": "key1", 00:13:19.952 "dhchap_ctrlr_key": "ckey2", 00:13:19.952 "method": "bdev_nvme_attach_controller", 00:13:19.952 "req_id": 1 00:13:19.952 } 00:13:19.952 Got JSON-RPC error response 00:13:19.952 response: 00:13:19.952 { 00:13:19.952 "code": -5, 00:13:19.952 "message": "Input/output error" 00:13:19.952 } 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key1 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.952 22:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.519 request: 00:13:20.519 { 00:13:20.519 "name": "nvme0", 00:13:20.519 "trtype": "tcp", 00:13:20.519 "traddr": "10.0.0.2", 00:13:20.519 "adrfam": "ipv4", 00:13:20.519 "trsvcid": "4420", 00:13:20.519 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:20.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0", 00:13:20.519 "prchk_reftag": false, 00:13:20.519 "prchk_guard": false, 00:13:20.519 "hdgst": false, 00:13:20.519 "ddgst": false, 00:13:20.519 "dhchap_key": "key1", 00:13:20.519 "dhchap_ctrlr_key": "ckey1", 00:13:20.519 "method": "bdev_nvme_attach_controller", 00:13:20.519 "req_id": 1 00:13:20.519 } 00:13:20.519 Got JSON-RPC error response 00:13:20.519 response: 00:13:20.519 { 00:13:20.519 "code": -5, 00:13:20.519 "message": "Input/output error" 00:13:20.519 } 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 69324 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69324 ']' 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 
69324 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69324 00:13:20.519 killing process with pid 69324 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69324' 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69324 00:13:20.519 22:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69324 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72386 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72386 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72386 ']' 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.777 22:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
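At this point the first target (pid 69324) has been killed and a second instance is launched for the remaining cases, this time with the nvmf_auth debug log flag enabled. The relaunch corresponds to the command visible above; the framework_start_init step is an assumption about what the rpc_cmd that follows issues once /var/tmp/spdk.sock is up.

# -i 0: shm id, -e 0xFFFF: tracepoint group mask, -L nvmf_auth: nvmf auth debug log
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# --wait-for-rpc parks the app at the RPC server until initialization is requested
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init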
00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72386 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72386 ']' 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.788 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.046 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.046 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:22.046 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:22.046 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.046 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:22.303 22:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:22.869 00:13:22.869 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.869 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.869 22:43:38 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.127 { 00:13:23.127 "cntlid": 1, 00:13:23.127 "qid": 0, 00:13:23.127 "state": "enabled", 00:13:23.127 "listen_address": { 00:13:23.127 "trtype": "TCP", 00:13:23.127 "adrfam": "IPv4", 00:13:23.127 "traddr": "10.0.0.2", 00:13:23.127 "trsvcid": "4420" 00:13:23.127 }, 00:13:23.127 "peer_address": { 00:13:23.127 "trtype": "TCP", 00:13:23.127 "adrfam": "IPv4", 00:13:23.127 "traddr": "10.0.0.1", 00:13:23.127 "trsvcid": "36472" 00:13:23.127 }, 00:13:23.127 "auth": { 00:13:23.127 "state": "completed", 00:13:23.127 "digest": "sha512", 00:13:23.127 "dhgroup": "ffdhe8192" 00:13:23.127 } 00:13:23.127 } 00:13:23.127 ]' 00:13:23.127 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.386 22:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.644 22:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-secret DHHC-1:03:NWE3OTU5Njc4NDcyMTkyMTA5ZDU5MTVlZjI4YzkzM2VlYzIyYzVhYmYwZWZlZTNjM2Y2ZDlkMjYyZGUwZDY2Mt2thJI=: 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --dhchap-key key3 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:24.582 22:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.582 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.841 request: 00:13:24.841 { 00:13:24.841 "name": "nvme0", 00:13:24.841 "trtype": "tcp", 00:13:24.841 "traddr": "10.0.0.2", 00:13:24.841 "adrfam": "ipv4", 00:13:24.841 "trsvcid": "4420", 00:13:24.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:24.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0", 00:13:24.841 "prchk_reftag": false, 00:13:24.841 "prchk_guard": false, 00:13:24.841 "hdgst": false, 00:13:24.841 "ddgst": false, 00:13:24.841 "dhchap_key": "key3", 00:13:24.841 "method": "bdev_nvme_attach_controller", 00:13:24.841 "req_id": 1 00:13:24.841 } 00:13:24.841 Got JSON-RPC error response 00:13:24.841 response: 00:13:24.841 { 00:13:24.841 "code": -5, 00:13:24.841 "message": "Input/output error" 00:13:24.841 } 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:24.841 22:43:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:24.841 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.409 request: 00:13:25.409 { 00:13:25.409 "name": "nvme0", 00:13:25.409 "trtype": "tcp", 00:13:25.409 "traddr": "10.0.0.2", 00:13:25.409 "adrfam": "ipv4", 00:13:25.409 "trsvcid": "4420", 00:13:25.409 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0", 00:13:25.409 "prchk_reftag": false, 00:13:25.409 "prchk_guard": false, 00:13:25.409 "hdgst": false, 00:13:25.409 "ddgst": false, 00:13:25.409 "dhchap_key": "key3", 00:13:25.409 "method": "bdev_nvme_attach_controller", 00:13:25.409 "req_id": 1 00:13:25.409 } 00:13:25.409 Got JSON-RPC error response 00:13:25.409 response: 00:13:25.409 { 00:13:25.409 "code": -5, 00:13:25.409 "message": "Input/output error" 00:13:25.409 } 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s 
sha256,sha384,sha512 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.409 22:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.668 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:13:25.927 request: 00:13:25.927 { 00:13:25.927 "name": "nvme0", 00:13:25.927 "trtype": "tcp", 00:13:25.927 "traddr": "10.0.0.2", 00:13:25.927 "adrfam": "ipv4", 00:13:25.927 "trsvcid": "4420", 00:13:25.927 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0", 00:13:25.927 "prchk_reftag": false, 00:13:25.927 "prchk_guard": false, 00:13:25.927 "hdgst": false, 00:13:25.927 "ddgst": false, 00:13:25.927 "dhchap_key": "key0", 00:13:25.927 "dhchap_ctrlr_key": "key1", 00:13:25.927 "method": "bdev_nvme_attach_controller", 00:13:25.927 "req_id": 1 00:13:25.927 } 00:13:25.927 Got JSON-RPC error response 00:13:25.927 response: 00:13:25.927 { 00:13:25.927 "code": -5, 00:13:25.927 "message": "Input/output error" 00:13:25.927 } 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:25.927 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:26.493 00:13:26.493 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:26.493 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:26.493 22:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.752 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.752 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.752 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69356 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69356 ']' 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69356 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69356 00:13:27.010 killing process with pid 69356 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:27.010 22:43:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69356' 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69356 00:13:27.010 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69356 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:27.269 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:27.269 rmmod nvme_tcp 00:13:27.269 rmmod nvme_fabrics 00:13:27.269 rmmod nvme_keyring 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72386 ']' 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72386 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72386 ']' 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72386 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72386 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72386' 00:13:27.528 killing process with pid 72386 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72386 00:13:27.528 22:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72386 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oIO /tmp/spdk.key-sha256.lbG /tmp/spdk.key-sha384.Eo8 /tmp/spdk.key-sha512.E2X /tmp/spdk.key-sha512.3BU /tmp/spdk.key-sha384.Ers /tmp/spdk.key-sha256.0NN '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:27.787 00:13:27.787 real 2m53.010s 00:13:27.787 user 6m54.787s 00:13:27.787 sys 0m27.337s 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.787 ************************************ 00:13:27.787 END TEST nvmf_auth_target 00:13:27.787 ************************************ 00:13:27.787 22:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.787 22:43:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.787 22:43:43 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:27.787 22:43:43 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:27.787 22:43:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:27.787 22:43:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.787 22:43:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.787 ************************************ 00:13:27.787 START TEST nvmf_bdevio_no_huge 00:13:27.787 ************************************ 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:27.787 * Looking for test storage... 00:13:27.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.787 
22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.787 22:43:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.787 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:27.788 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:28.046 Cannot find device "nvmf_tgt_br" 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.046 Cannot find device "nvmf_tgt_br2" 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:28.046 Cannot find device "nvmf_tgt_br" 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:28.046 Cannot find device "nvmf_tgt_br2" 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:28.046 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:28.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:13:28.305 00:13:28.305 --- 10.0.0.2 ping statistics --- 00:13:28.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.305 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:28.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:28.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:13:28.305 00:13:28.305 --- 10.0.0.3 ping statistics --- 00:13:28.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.305 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:28.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:28.305 00:13:28.305 --- 10.0.0.1 ping statistics --- 00:13:28.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.305 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72708 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72708 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72708 ']' 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.305 22:43:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 [2024-07-15 22:43:43.798182] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:13:28.305 [2024-07-15 22:43:43.798304] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:28.562 [2024-07-15 22:43:43.959935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.562 [2024-07-15 22:43:44.115856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:28.562 [2024-07-15 22:43:44.116411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.562 [2024-07-15 22:43:44.116904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.562 [2024-07-15 22:43:44.117270] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.562 [2024-07-15 22:43:44.117372] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.562 [2024-07-15 22:43:44.117833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:28.562 [2024-07-15 22:43:44.117956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:28.562 [2024-07-15 22:43:44.118100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:28.562 [2024-07-15 22:43:44.118104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.562 [2024-07-15 22:43:44.123226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.576 [2024-07-15 22:43:44.866792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.576 Malloc0 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.576 [2024-07-15 22:43:44.915085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:29.576 { 00:13:29.576 "params": { 00:13:29.576 "name": "Nvme$subsystem", 00:13:29.576 "trtype": "$TEST_TRANSPORT", 00:13:29.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:29.576 "adrfam": "ipv4", 00:13:29.576 "trsvcid": "$NVMF_PORT", 00:13:29.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:29.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:29.576 "hdgst": ${hdgst:-false}, 00:13:29.576 "ddgst": ${ddgst:-false} 00:13:29.576 }, 00:13:29.576 "method": "bdev_nvme_attach_controller" 00:13:29.576 } 00:13:29.576 EOF 00:13:29.576 )") 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:29.576 22:43:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:29.576 "params": { 00:13:29.576 "name": "Nvme1", 00:13:29.576 "trtype": "tcp", 00:13:29.576 "traddr": "10.0.0.2", 00:13:29.576 "adrfam": "ipv4", 00:13:29.576 "trsvcid": "4420", 00:13:29.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.576 "hdgst": false, 00:13:29.576 "ddgst": false 00:13:29.576 }, 00:13:29.576 "method": "bdev_nvme_attach_controller" 00:13:29.576 }' 00:13:29.576 [2024-07-15 22:43:44.972889] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:13:29.576 [2024-07-15 22:43:44.973011] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72744 ] 00:13:29.576 [2024-07-15 22:43:45.121727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.835 [2024-07-15 22:43:45.268905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.835 [2024-07-15 22:43:45.269064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.835 [2024-07-15 22:43:45.269072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.835 [2024-07-15 22:43:45.283643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.094 I/O targets: 00:13:30.094 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:30.094 00:13:30.094 00:13:30.094 CUnit - A unit testing framework for C - Version 2.1-3 00:13:30.094 http://cunit.sourceforge.net/ 00:13:30.094 00:13:30.094 00:13:30.094 Suite: bdevio tests on: Nvme1n1 00:13:30.094 Test: blockdev write read block ...passed 00:13:30.094 Test: blockdev write zeroes read block ...passed 00:13:30.094 Test: blockdev write zeroes read no split ...passed 00:13:30.094 Test: blockdev write zeroes read split ...passed 00:13:30.094 Test: blockdev write zeroes read split partial ...passed 00:13:30.094 Test: blockdev reset ...[2024-07-15 22:43:45.500317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:30.094 [2024-07-15 22:43:45.500646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7bf310 (9): Bad file descriptor 00:13:30.094 [2024-07-15 22:43:45.521309] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:30.094 passed 00:13:30.094 Test: blockdev write read 8 blocks ...passed 00:13:30.094 Test: blockdev write read size > 128k ...passed 00:13:30.094 Test: blockdev write read invalid size ...passed 00:13:30.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.094 Test: blockdev write read max offset ...passed 00:13:30.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.094 Test: blockdev writev readv 8 blocks ...passed 00:13:30.094 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.094 Test: blockdev writev readv block ...passed 00:13:30.094 Test: blockdev writev readv size > 128k ...passed 00:13:30.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.094 Test: blockdev comparev and writev ...[2024-07-15 22:43:45.531343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.531581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.531625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.531640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.532043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.532067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.532088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.532101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.532434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.532455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.532476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.532489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.532833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.532854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.532875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:30.094 [2024-07-15 22:43:45.532888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:30.094 passed 00:13:30.094 Test: blockdev nvme passthru rw ...passed 00:13:30.094 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:43:45.533956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:30.094 [2024-07-15 22:43:45.533993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.534118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:30.094 [2024-07-15 22:43:45.534139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:30.094 passed 00:13:30.094 Test: blockdev nvme admin passthru ...[2024-07-15 22:43:45.534254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:30.094 [2024-07-15 22:43:45.534282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:30.094 [2024-07-15 22:43:45.534404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:30.094 [2024-07-15 22:43:45.534424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:30.094 passed 00:13:30.094 Test: blockdev copy ...passed 00:13:30.094 00:13:30.094 Run Summary: Type Total Ran Passed Failed Inactive 00:13:30.094 suites 1 1 n/a 0 0 00:13:30.094 tests 23 23 23 0 0 00:13:30.094 asserts 152 152 152 0 n/a 00:13:30.094 00:13:30.094 Elapsed time = 0.198 seconds 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.353 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:30.611 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.611 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:30.611 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.611 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.611 rmmod nvme_tcp 00:13:30.611 rmmod nvme_fabrics 00:13:30.611 rmmod nvme_keyring 00:13:30.611 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.611 22:43:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72708 ']' 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72708 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72708 ']' 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72708 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72708 00:13:30.611 killing process with pid 72708 00:13:30.611 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:30.612 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:30.612 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72708' 00:13:30.612 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72708 00:13:30.612 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72708 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:31.179 00:13:31.179 real 0m3.266s 00:13:31.179 user 0m10.429s 00:13:31.179 sys 0m1.347s 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.179 22:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:31.179 ************************************ 00:13:31.179 END TEST nvmf_bdevio_no_huge 00:13:31.179 ************************************ 00:13:31.179 22:43:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:31.179 22:43:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:31.179 22:43:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:31.179 22:43:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.179 22:43:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:31.179 ************************************ 00:13:31.179 START TEST nvmf_tls 00:13:31.179 ************************************ 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:31.179 * Looking for test storage... 
00:13:31.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.179 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:31.180 Cannot find device "nvmf_tgt_br" 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.180 Cannot find device "nvmf_tgt_br2" 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:31.180 Cannot find device "nvmf_tgt_br" 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:31.180 Cannot find device "nvmf_tgt_br2" 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:31.180 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:31.438 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:31.438 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:31.439 22:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:31.439 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:31.439 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:31.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:31.697 00:13:31.697 --- 10.0.0.2 ping statistics --- 00:13:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.697 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:31.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:31.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:31.697 00:13:31.697 --- 10.0.0.3 ping statistics --- 00:13:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.697 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:31.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:31.697 00:13:31.697 --- 10.0.0.1 ping statistics --- 00:13:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.697 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72932 00:13:31.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72932 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72932 ']' 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.697 22:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.697 [2024-07-15 22:43:47.096095] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:13:31.697 [2024-07-15 22:43:47.096214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.697 [2024-07-15 22:43:47.239058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.956 [2024-07-15 22:43:47.352157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.956 [2024-07-15 22:43:47.352443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
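The nvmf_veth_init steps traced above build the virtual topology that these pings just verified: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator stays in the root namespace on 10.0.0.1, and a bridge ties the host-side veth ends together. A condensed sketch of that setup, with names and addresses copied from the trace (the second target interface on 10.0.0.3 is built the same way and omitted here; this is an illustration, not a replacement for nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk                                  # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up        # bridge the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # same reachability check as in the log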
00:13:31.956 [2024-07-15 22:43:47.352628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.956 [2024-07-15 22:43:47.352868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.956 [2024-07-15 22:43:47.353034] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.956 [2024-07-15 22:43:47.353121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.524 22:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.524 22:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:32.524 22:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.524 22:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.524 22:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.821 22:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.822 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:32.822 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:32.822 true 00:13:32.822 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:32.822 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:33.079 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:33.079 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:33.079 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:33.338 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:33.595 22:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:33.595 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:33.595 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:33.595 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:34.161 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.161 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:34.161 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:34.161 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:34.161 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.161 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:34.418 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:34.418 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:34.418 22:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:34.676 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.676 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:13:34.934 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:34.934 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:34.934 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:35.192 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:35.192 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:35.450 22:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.GUxXBi5xlU 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.uas8079YLd 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.GUxXBi5xlU 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uas8079YLd 00:13:35.708 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:35.965 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:36.222 [2024-07-15 22:43:51.684809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:36.222 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.GUxXBi5xlU 00:13:36.222 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GUxXBi5xlU 00:13:36.222 22:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:36.788 [2024-07-15 22:43:52.065207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.788 22:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:37.046 22:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:37.046 [2024-07-15 22:43:52.597336] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:37.046 [2024-07-15 22:43:52.597550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.304 22:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:37.304 malloc0 00:13:37.561 22:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:37.561 22:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GUxXBi5xlU 00:13:37.819 [2024-07-15 22:43:53.365043] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:38.076 22:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GUxXBi5xlU 00:13:48.063 Initializing NVMe Controllers 00:13:48.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:48.063 Initialization complete. Launching workers. 
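The /tmp/tmp.GUxXBi5xlU file handed to nvmf_subsystem_add_host above contains the NVMeTLSkey-1:01:... string that format_interchange_psk produced earlier in the trace. As a rough sketch of how such an interchange string is composed, assuming the configured key is taken as literal ASCII bytes and that a 4-byte CRC-32 of those bytes is appended before base64 encoding (the CRC byte order here is my assumption; the authoritative logic is the python snippet embedded in nvmf/common.sh):

    # Hypothetical helper, not the shipped format_interchange_psk.
    psk_sketch() {
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # configured PSK, taken as literal ASCII bytes
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: CRC-32 appended little-endian
    print("NVMeTLSkey-1:%s:%s:" % (sys.argv[2], base64.b64encode(key + crc).decode()))
    ' "$1" "$2"
    }
    psk_sketch 00112233445566778899aabbccddeeff 01   # compare with the NVMeTLSkey-1:01:MDAx... value in the trace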
00:13:48.063 ======================================================== 00:13:48.063 Latency(us) 00:13:48.063 Device Information : IOPS MiB/s Average min max 00:13:48.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9253.23 36.15 6918.05 1065.08 9056.83 00:13:48.063 ======================================================== 00:13:48.063 Total : 9253.23 36.15 6918.05 1065.08 9056.83 00:13:48.063 00:13:48.063 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GUxXBi5xlU 00:13:48.063 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GUxXBi5xlU' 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73165 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73165 /var/tmp/bdevperf.sock 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73165 ']' 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.064 22:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.321 [2024-07-15 22:44:03.634485] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
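As a quick sanity check on the perf summary above, the MiB/s column is just IOPS times the 4096-byte I/O size set by -o 4096:

    awk 'BEGIN { iops = 9253.23; io = 4096; printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'
    # prints 36.15 MiB/s, matching the TCP NSID 1 row of the perf output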
00:13:48.321 [2024-07-15 22:44:03.635095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73165 ] 00:13:48.321 [2024-07-15 22:44:03.776713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.579 [2024-07-15 22:44:03.899848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.579 [2024-07-15 22:44:03.959472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:49.147 22:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.147 22:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:49.147 22:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GUxXBi5xlU 00:13:49.406 [2024-07-15 22:44:04.960236] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.406 [2024-07-15 22:44:04.960675] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:49.664 TLSTESTn1 00:13:49.664 22:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:49.664 Running I/O for 10 seconds... 00:13:59.636 00:13:59.636 Latency(us) 00:13:59.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.636 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:59.636 Verification LBA range: start 0x0 length 0x2000 00:13:59.636 TLSTESTn1 : 10.02 3884.55 15.17 0.00 0.00 32884.93 7685.59 35985.22 00:13:59.636 =================================================================================================================== 00:13:59.636 Total : 3884.55 15.17 0.00 0.00 32884.93 7685.59 35985.22 00:13:59.636 0 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73165 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73165 ']' 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73165 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73165 00:13:59.895 killing process with pid 73165 00:13:59.895 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.895 00:13:59.895 Latency(us) 00:13:59.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.895 =================================================================================================================== 00:13:59.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 
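Stripped of the xtrace noise, the initiator-side verification traced above comes down to three steps against the bdevperf RPC socket (binary paths, socket path, NQNs and key file all copied from the trace; shown only as a recap):

    # 1. start bdevperf in RPC-wait mode on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # 2. attach a TLS-protected NVMe/TCP controller with the PSK that was registered on the target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GUxXBi5xlU
    # 3. drive I/O through the new bdev and collect the numbers shown above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests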
00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73165' 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73165 00:13:59.895 [2024-07-15 22:44:15.238381] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:59.895 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73165 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uas8079YLd 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uas8079YLd 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uas8079YLd 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uas8079YLd' 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73299 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73299 /var/tmp/bdevperf.sock 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73299 ']' 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.155 22:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.155 [2024-07-15 22:44:15.540768] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:00.155 [2024-07-15 22:44:15.541205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73299 ] 00:14:00.155 [2024-07-15 22:44:15.682519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.413 [2024-07-15 22:44:15.791043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.413 [2024-07-15 22:44:15.846304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:00.981 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.981 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:00.981 22:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uas8079YLd 00:14:01.239 [2024-07-15 22:44:16.776494] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.239 [2024-07-15 22:44:16.776633] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:01.240 [2024-07-15 22:44:16.783626] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:01.240 [2024-07-15 22:44:16.784131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1040 (107): Transport endpoint is not connected 00:14:01.240 [2024-07-15 22:44:16.785121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba1040 (9): Bad file descriptor 00:14:01.240 [2024-07-15 22:44:16.786118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:01.240 [2024-07-15 22:44:16.786148] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:01.240 [2024-07-15 22:44:16.786159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:01.240 request: 00:14:01.240 { 00:14:01.240 "name": "TLSTEST", 00:14:01.240 "trtype": "tcp", 00:14:01.240 "traddr": "10.0.0.2", 00:14:01.240 "adrfam": "ipv4", 00:14:01.240 "trsvcid": "4420", 00:14:01.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.240 "prchk_reftag": false, 00:14:01.240 "prchk_guard": false, 00:14:01.240 "hdgst": false, 00:14:01.240 "ddgst": false, 00:14:01.240 "psk": "/tmp/tmp.uas8079YLd", 00:14:01.240 "method": "bdev_nvme_attach_controller", 00:14:01.240 "req_id": 1 00:14:01.240 } 00:14:01.240 Got JSON-RPC error response 00:14:01.240 response: 00:14:01.240 { 00:14:01.240 "code": -5, 00:14:01.240 "message": "Input/output error" 00:14:01.240 } 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73299 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73299 ']' 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73299 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73299 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:01.499 killing process with pid 73299 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73299' 00:14:01.499 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.499 00:14:01.499 Latency(us) 00:14:01.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.499 =================================================================================================================== 00:14:01.499 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73299 00:14:01.499 [2024-07-15 22:44:16.833373] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:01.499 22:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73299 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GUxXBi5xlU 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GUxXBi5xlU 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GUxXBi5xlU 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GUxXBi5xlU' 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73326 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73326 /var/tmp/bdevperf.sock 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73326 ']' 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.499 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.756 22:44:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.756 [2024-07-15 22:44:17.116151] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:01.756 [2024-07-15 22:44:17.116499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73326 ] 00:14:01.756 [2024-07-15 22:44:17.254981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.014 [2024-07-15 22:44:17.368072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.014 [2024-07-15 22:44:17.421978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.581 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.581 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:02.581 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.GUxXBi5xlU 00:14:02.840 [2024-07-15 22:44:18.255346] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.840 [2024-07-15 22:44:18.255524] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:02.840 [2024-07-15 22:44:18.263653] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:02.840 [2024-07-15 22:44:18.263693] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:02.840 [2024-07-15 22:44:18.263743] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:02.840 [2024-07-15 22:44:18.264178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e2040 (107): Transport endpoint is not connected 00:14:02.840 [2024-07-15 22:44:18.265169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e2040 (9): Bad file descriptor 00:14:02.840 [2024-07-15 22:44:18.266166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:02.840 [2024-07-15 22:44:18.266190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:02.840 [2024-07-15 22:44:18.266201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:02.840 request: 00:14:02.840 { 00:14:02.840 "name": "TLSTEST", 00:14:02.840 "trtype": "tcp", 00:14:02.840 "traddr": "10.0.0.2", 00:14:02.840 "adrfam": "ipv4", 00:14:02.840 "trsvcid": "4420", 00:14:02.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.840 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:02.840 "prchk_reftag": false, 00:14:02.840 "prchk_guard": false, 00:14:02.840 "hdgst": false, 00:14:02.840 "ddgst": false, 00:14:02.840 "psk": "/tmp/tmp.GUxXBi5xlU", 00:14:02.840 "method": "bdev_nvme_attach_controller", 00:14:02.840 "req_id": 1 00:14:02.840 } 00:14:02.840 Got JSON-RPC error response 00:14:02.840 response: 00:14:02.840 { 00:14:02.840 "code": -5, 00:14:02.840 "message": "Input/output error" 00:14:02.840 } 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73326 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73326 ']' 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73326 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73326 00:14:02.840 killing process with pid 73326 00:14:02.840 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.840 00:14:02.840 Latency(us) 00:14:02.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.840 =================================================================================================================== 00:14:02.840 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73326' 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73326 00:14:02.840 [2024-07-15 22:44:18.325001] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:02.840 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73326 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GUxXBi5xlU 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GUxXBi5xlU 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:03.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GUxXBi5xlU 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GUxXBi5xlU' 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73354 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73354 /var/tmp/bdevperf.sock 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73354 ']' 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.099 22:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.099 [2024-07-15 22:44:18.588433] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:03.099 [2024-07-15 22:44:18.590068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73354 ] 00:14:03.357 [2024-07-15 22:44:18.723463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.357 [2024-07-15 22:44:18.839435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.357 [2024-07-15 22:44:18.895288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:04.292 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.292 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:04.292 22:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GUxXBi5xlU 00:14:04.552 [2024-07-15 22:44:19.879915] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:04.552 [2024-07-15 22:44:19.880088] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:04.552 [2024-07-15 22:44:19.884958] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:04.552 [2024-07-15 22:44:19.885000] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:04.552 [2024-07-15 22:44:19.885060] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:04.552 [2024-07-15 22:44:19.885665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb50040 (107): Transport endpoint is not connected 00:14:04.552 [2024-07-15 22:44:19.886651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb50040 (9): Bad file descriptor 00:14:04.552 [2024-07-15 22:44:19.887647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:04.552 [2024-07-15 22:44:19.887672] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:04.552 [2024-07-15 22:44:19.887683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:04.552 request: 00:14:04.552 { 00:14:04.552 "name": "TLSTEST", 00:14:04.552 "trtype": "tcp", 00:14:04.552 "traddr": "10.0.0.2", 00:14:04.552 "adrfam": "ipv4", 00:14:04.552 "trsvcid": "4420", 00:14:04.552 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:04.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:04.552 "prchk_reftag": false, 00:14:04.552 "prchk_guard": false, 00:14:04.552 "hdgst": false, 00:14:04.552 "ddgst": false, 00:14:04.552 "psk": "/tmp/tmp.GUxXBi5xlU", 00:14:04.552 "method": "bdev_nvme_attach_controller", 00:14:04.552 "req_id": 1 00:14:04.552 } 00:14:04.552 Got JSON-RPC error response 00:14:04.552 response: 00:14:04.552 { 00:14:04.552 "code": -5, 00:14:04.552 "message": "Input/output error" 00:14:04.552 } 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73354 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73354 ']' 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73354 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73354 00:14:04.552 killing process with pid 73354 00:14:04.552 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.552 00:14:04.552 Latency(us) 00:14:04.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.552 =================================================================================================================== 00:14:04.552 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73354' 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73354 00:14:04.552 [2024-07-15 22:44:19.932462] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:04.552 22:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73354 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73376 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73376 /var/tmp/bdevperf.sock 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73376 ']' 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.811 22:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.811 [2024-07-15 22:44:20.215042] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
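The request/response pair dumped a few lines above is plain SPDK JSON-RPC traffic on the bdevperf control socket; scripts/rpc.py is only a thin client for it. Below is a minimal stdlib-only sketch of the same bdev_nvme_attach_controller call, assuming the socket path, addresses, and PSK file from this particular run (placeholders anywhere else) and assuming the server answers with a single JSON object on its Unix-domain RPC socket.

import json
import socket

def spdk_rpc(sock_path, method, params, req_id=1):
    # Send one JSON-RPC 2.0 request over the SPDK Unix-domain socket and
    # keep reading until a complete JSON object has arrived.
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                reply, _ = json.JSONDecoder().raw_decode(buf.decode())
                return reply
            except ValueError:
                continue  # reply not complete yet, keep reading

# Parameters copied from the failing attach above; with a PSK the target does
# not recognize, the reply should carry {"error": {"code": -5, "message": "Input/output error"}}.
print(spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.GUxXBi5xlU",
}))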
00:14:04.811 [2024-07-15 22:44:20.215423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73376 ] 00:14:04.811 [2024-07-15 22:44:20.357738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.070 [2024-07-15 22:44:20.495549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.070 [2024-07-15 22:44:20.555466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:05.637 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.637 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:05.637 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:05.895 [2024-07-15 22:44:21.404171] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:05.895 [2024-07-15 22:44:21.406409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1266770 (9): Bad file descriptor 00:14:05.895 [2024-07-15 22:44:21.407404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:05.895 [2024-07-15 22:44:21.407432] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:05.895 [2024-07-15 22:44:21.407444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:05.895 request: 00:14:05.895 { 00:14:05.895 "name": "TLSTEST", 00:14:05.895 "trtype": "tcp", 00:14:05.895 "traddr": "10.0.0.2", 00:14:05.895 "adrfam": "ipv4", 00:14:05.895 "trsvcid": "4420", 00:14:05.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.895 "prchk_reftag": false, 00:14:05.895 "prchk_guard": false, 00:14:05.895 "hdgst": false, 00:14:05.895 "ddgst": false, 00:14:05.895 "method": "bdev_nvme_attach_controller", 00:14:05.895 "req_id": 1 00:14:05.895 } 00:14:05.896 Got JSON-RPC error response 00:14:05.896 response: 00:14:05.896 { 00:14:05.896 "code": -5, 00:14:05.896 "message": "Input/output error" 00:14:05.896 } 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73376 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73376 ']' 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73376 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73376 00:14:05.896 killing process with pid 73376 00:14:05.896 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.896 00:14:05.896 Latency(us) 00:14:05.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.896 =================================================================================================================== 00:14:05.896 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73376' 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73376 00:14:05.896 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73376 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72932 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72932 ']' 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72932 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72932 00:14:06.154 killing process with pid 72932 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72932' 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72932 00:14:06.154 [2024-07-15 22:44:21.710294] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:06.154 22:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72932 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:06.412 22:44:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:06.671 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:06.671 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:06.671 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.UUWNrq9s03 00:14:06.671 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:06.671 22:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.UUWNrq9s03 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73420 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73420 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73420 ']' 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.671 22:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.671 [2024-07-15 22:44:22.057126] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
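The format_interchange_psk step above expands the raw hex key 00112233445566778899aabbccddeeff0011223344556677 plus hash identifier 2 into the interchange string NVMeTLSkey-1:02:...: that is written to /tmp/tmp.UUWNrq9s03 and locked down with chmod 0600. A short sketch of that encoding follows; the exact layout (base64 over the key text followed by its little-endian CRC32) is an assumption inferred from the helper's output, not verified here.

import base64
import zlib

def format_interchange_psk(key_text, hash_id):
    # Assumed layout: "NVMeTLSkey-1:<hash>:" + base64(key bytes + CRC32 LE) + ":"
    material = key_text.encode()
    crc = zlib.crc32(material).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(material + crc).decode())

# If the layout assumption holds, this reproduces the key_long value in the log.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))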
00:14:06.671 [2024-07-15 22:44:22.057226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.671 [2024-07-15 22:44:22.189007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.930 [2024-07-15 22:44:22.304316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.930 [2024-07-15 22:44:22.304373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.930 [2024-07-15 22:44:22.304385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.930 [2024-07-15 22:44:22.304393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.930 [2024-07-15 22:44:22.304400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.930 [2024-07-15 22:44:22.304432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.930 [2024-07-15 22:44:22.358975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.UUWNrq9s03 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UUWNrq9s03 00:14:07.497 22:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:07.755 [2024-07-15 22:44:23.317558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.012 22:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:08.270 22:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:08.527 [2024-07-15 22:44:23.881642] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:08.527 [2024-07-15 22:44:23.881880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.527 22:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:08.785 malloc0 00:14:08.785 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:09.043 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:09.302 [2024-07-15 22:44:24.645611] 
tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:09.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UUWNrq9s03 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UUWNrq9s03' 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73475 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73475 /var/tmp/bdevperf.sock 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73475 ']' 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.302 22:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.302 [2024-07-15 22:44:24.709620] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
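The target-side setup traced above boils down to six scripts/rpc.py calls: create the TCP transport, create subsystem cnode1, add a listener with -k so it requires TLS, create a malloc bdev, expose it as namespace 1, and register host1 together with the PSK path. A sketch that replays exactly that sequence with subprocess; the repository path, address, and PSK file are the ones from this CI job and would differ elsewhere.

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path used in this run
PSK = "/tmp/tmp.UUWNrq9s03"                           # 0600 PSK interchange file

def rpc(*args):
    # Each call mirrors one line of the target/tls.sh setup traced above.
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")   # -k: TLS-enabled listener
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
    "nqn.2016-06.io.spdk:host1", "--psk", PSK)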
00:14:09.302 [2024-07-15 22:44:24.709917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73475 ] 00:14:09.302 [2024-07-15 22:44:24.848223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.560 [2024-07-15 22:44:24.980194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.560 [2024-07-15 22:44:25.038107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:10.149 22:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.149 22:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:10.149 22:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:10.408 [2024-07-15 22:44:25.921105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:10.408 [2024-07-15 22:44:25.921504] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:10.666 TLSTESTn1 00:14:10.666 22:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:10.666 Running I/O for 10 seconds... 00:14:20.643 00:14:20.643 Latency(us) 00:14:20.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.643 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:20.643 Verification LBA range: start 0x0 length 0x2000 00:14:20.643 TLSTESTn1 : 10.02 3639.15 14.22 0.00 0.00 35102.51 7745.16 31933.91 00:14:20.643 =================================================================================================================== 00:14:20.643 Total : 3639.15 14.22 0.00 0.00 35102.51 7745.16 31933.91 00:14:20.643 0 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73475 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73475 ']' 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73475 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73475 00:14:20.643 killing process with pid 73475 00:14:20.643 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.643 00:14:20.643 Latency(us) 00:14:20.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.643 =================================================================================================================== 00:14:20.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 
00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73475' 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73475 00:14:20.643 [2024-07-15 22:44:36.189163] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:20.643 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73475 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.UUWNrq9s03 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UUWNrq9s03 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UUWNrq9s03 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:20.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UUWNrq9s03 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UUWNrq9s03' 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73608 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73608 /var/tmp/bdevperf.sock 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73608 ']' 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.900 22:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.156 [2024-07-15 22:44:36.474917] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:21.156 [2024-07-15 22:44:36.475296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73608 ] 00:14:21.156 [2024-07-15 22:44:36.607692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.413 [2024-07-15 22:44:36.725272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.413 [2024-07-15 22:44:36.778836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.974 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.974 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:21.974 22:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:22.242 [2024-07-15 22:44:37.782366] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.242 [2024-07-15 22:44:37.782760] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:22.242 [2024-07-15 22:44:37.782924] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.UUWNrq9s03 00:14:22.242 request: 00:14:22.242 { 00:14:22.242 "name": "TLSTEST", 00:14:22.242 "trtype": "tcp", 00:14:22.242 "traddr": "10.0.0.2", 00:14:22.242 "adrfam": "ipv4", 00:14:22.242 "trsvcid": "4420", 00:14:22.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.242 "prchk_reftag": false, 00:14:22.242 "prchk_guard": false, 00:14:22.242 "hdgst": false, 00:14:22.242 "ddgst": false, 00:14:22.242 "psk": "/tmp/tmp.UUWNrq9s03", 00:14:22.242 "method": "bdev_nvme_attach_controller", 00:14:22.242 "req_id": 1 00:14:22.242 } 00:14:22.242 Got JSON-RPC error response 00:14:22.242 response: 00:14:22.242 { 00:14:22.242 "code": -1, 00:14:22.242 "message": "Operation not permitted" 00:14:22.242 } 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73608 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73608 ']' 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73608 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73608 00:14:22.516 killing process with pid 73608 00:14:22.516 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.516 00:14:22.516 Latency(us) 00:14:22.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.516 =================================================================================================================== 00:14:22.516 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 73608' 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73608 00:14:22.516 22:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73608 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73420 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73420 ']' 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73420 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73420 00:14:22.516 killing process with pid 73420 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73420' 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73420 00:14:22.516 [2024-07-15 22:44:38.079596] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:22.516 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73420 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73642 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73642 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73642 ']' 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.774 22:44:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.032 [2024-07-15 22:44:38.390169] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:23.032 [2024-07-15 22:44:38.390903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.032 [2024-07-15 22:44:38.532823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.290 [2024-07-15 22:44:38.674654] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.290 [2024-07-15 22:44:38.674912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.290 [2024-07-15 22:44:38.675065] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.290 [2024-07-15 22:44:38.675281] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.290 [2024-07-15 22:44:38.675385] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.290 [2024-07-15 22:44:38.675518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.290 [2024-07-15 22:44:38.728845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.UUWNrq9s03 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UUWNrq9s03 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.UUWNrq9s03 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UUWNrq9s03 00:14:23.857 22:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:24.115 [2024-07-15 22:44:39.660851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.115 22:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:24.680 22:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:24.938 [2024-07-15 22:44:40.256962] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:14:24.938 [2024-07-15 22:44:40.257431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.938 22:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:24.938 malloc0 00:14:25.196 22:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:25.453 22:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:25.453 [2024-07-15 22:44:41.016302] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:25.453 [2024-07-15 22:44:41.016585] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:25.453 [2024-07-15 22:44:41.016750] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:25.711 request: 00:14:25.711 { 00:14:25.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.711 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.711 "psk": "/tmp/tmp.UUWNrq9s03", 00:14:25.711 "method": "nvmf_subsystem_add_host", 00:14:25.711 "req_id": 1 00:14:25.711 } 00:14:25.711 Got JSON-RPC error response 00:14:25.711 response: 00:14:25.711 { 00:14:25.711 "code": -32603, 00:14:25.711 "message": "Internal error" 00:14:25.711 } 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73642 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73642 ']' 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73642 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73642 00:14:25.711 killing process with pid 73642 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73642' 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73642 00:14:25.711 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73642 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.UUWNrq9s03 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
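Both failures above come down to the key file's mode: after chmod 0666 the initiator rejects the attach (Incorrect permissions for PSK file, JSON-RPC -1 Operation not permitted) and the target refuses nvmf_subsystem_add_host (Could not retrieve PSK from file, -32603 Internal error); only once the file is back to 0600 does the flow continue. A small sketch that creates the key file restrictively from the start, so the permission check never trips; the key string is the one generated earlier in this run.

import os
import tempfile

def write_psk_file(psk, directory="/tmp"):
    # mkstemp creates the file with mode 0600, matching what SPDK expects here.
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, psk.encode())
    finally:
        os.close(fd)
    return path

path = write_psk_file(
    "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:")
print(path, oct(os.stat(path).st_mode & 0o777))   # expect 0o600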
00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73709 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73709 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73709 ']' 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.969 22:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.969 [2024-07-15 22:44:41.348931] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:25.969 [2024-07-15 22:44:41.349025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.969 [2024-07-15 22:44:41.485761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.226 [2024-07-15 22:44:41.604850] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.226 [2024-07-15 22:44:41.605156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.227 [2024-07-15 22:44:41.605383] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.227 [2024-07-15 22:44:41.605586] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.227 [2024-07-15 22:44:41.605713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:26.227 [2024-07-15 22:44:41.605841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.227 [2024-07-15 22:44:41.658844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.UUWNrq9s03 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UUWNrq9s03 00:14:26.814 22:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:27.072 [2024-07-15 22:44:42.534451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.072 22:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:27.330 22:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:27.588 [2024-07-15 22:44:43.026542] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.588 [2024-07-15 22:44:43.026782] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.588 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:27.849 malloc0 00:14:27.849 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:28.107 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:28.365 [2024-07-15 22:44:43.773858] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:28.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
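On the initiator side the remaining steps, both here and in the earlier successful run, are an attach through the bdevperf RPC socket using the same PSK file, followed by a perform_tests pass driven by bdevperf.py. A subprocess sketch of that pair; bdevperf is assumed to be already running with -z -r /var/tmp/bdevperf.sock, and the paths and addresses are specific to this job.

import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"
SOCK = "/var/tmp/bdevperf.sock"       # bdevperf was started with -z -r $SOCK
PSK = "/tmp/tmp.UUWNrq9s03"

# Attach the TLS-protected controller with the same arguments target/tls.sh traces above.
subprocess.run([SPDK + "/scripts/rpc.py", "-s", SOCK, "bdev_nvme_attach_controller",
                "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
                "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
                "-q", "nqn.2016-06.io.spdk:host1", "--psk", PSK], check=True)

# Kick off the queued verify workload, as the earlier TLSTESTn1 run did.
subprocess.run([SPDK + "/examples/bdev/bdevperf/bdevperf.py",
                "-t", "20", "-s", SOCK, "perform_tests"], check=True)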
00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73765 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73765 /var/tmp/bdevperf.sock 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73765 ']' 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.365 22:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.366 22:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.366 22:44:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.366 [2024-07-15 22:44:43.844070] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:28.366 [2024-07-15 22:44:43.844475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73765 ] 00:14:28.624 [2024-07-15 22:44:43.980350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.624 [2024-07-15 22:44:44.100913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.624 [2024-07-15 22:44:44.154912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.559 22:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.559 22:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:29.559 22:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:29.559 [2024-07-15 22:44:45.029283] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:29.559 [2024-07-15 22:44:45.029421] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:29.559 TLSTESTn1 00:14:29.559 22:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:30.126 22:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:30.126 "subsystems": [ 00:14:30.126 { 00:14:30.126 "subsystem": "keyring", 00:14:30.126 "config": [] 00:14:30.126 }, 00:14:30.126 { 00:14:30.126 "subsystem": "iobuf", 00:14:30.126 "config": [ 00:14:30.126 { 00:14:30.126 "method": "iobuf_set_options", 00:14:30.126 "params": { 00:14:30.126 "small_pool_count": 8192, 00:14:30.126 "large_pool_count": 1024, 00:14:30.126 "small_bufsize": 8192, 00:14:30.126 "large_bufsize": 135168 00:14:30.126 } 00:14:30.126 } 00:14:30.126 ] 00:14:30.126 }, 00:14:30.127 { 00:14:30.127 "subsystem": "sock", 00:14:30.127 "config": [ 00:14:30.127 { 00:14:30.127 "method": 
"sock_set_default_impl", 00:14:30.127 "params": { 00:14:30.127 "impl_name": "uring" 00:14:30.127 } 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "method": "sock_impl_set_options", 00:14:30.127 "params": { 00:14:30.127 "impl_name": "ssl", 00:14:30.127 "recv_buf_size": 4096, 00:14:30.127 "send_buf_size": 4096, 00:14:30.127 "enable_recv_pipe": true, 00:14:30.127 "enable_quickack": false, 00:14:30.127 "enable_placement_id": 0, 00:14:30.127 "enable_zerocopy_send_server": true, 00:14:30.127 "enable_zerocopy_send_client": false, 00:14:30.127 "zerocopy_threshold": 0, 00:14:30.127 "tls_version": 0, 00:14:30.127 "enable_ktls": false 00:14:30.127 } 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "method": "sock_impl_set_options", 00:14:30.127 "params": { 00:14:30.127 "impl_name": "posix", 00:14:30.127 "recv_buf_size": 2097152, 00:14:30.127 "send_buf_size": 2097152, 00:14:30.127 "enable_recv_pipe": true, 00:14:30.127 "enable_quickack": false, 00:14:30.127 "enable_placement_id": 0, 00:14:30.127 "enable_zerocopy_send_server": true, 00:14:30.127 "enable_zerocopy_send_client": false, 00:14:30.127 "zerocopy_threshold": 0, 00:14:30.127 "tls_version": 0, 00:14:30.127 "enable_ktls": false 00:14:30.127 } 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "method": "sock_impl_set_options", 00:14:30.127 "params": { 00:14:30.127 "impl_name": "uring", 00:14:30.127 "recv_buf_size": 2097152, 00:14:30.127 "send_buf_size": 2097152, 00:14:30.127 "enable_recv_pipe": true, 00:14:30.127 "enable_quickack": false, 00:14:30.127 "enable_placement_id": 0, 00:14:30.127 "enable_zerocopy_send_server": false, 00:14:30.127 "enable_zerocopy_send_client": false, 00:14:30.127 "zerocopy_threshold": 0, 00:14:30.127 "tls_version": 0, 00:14:30.127 "enable_ktls": false 00:14:30.127 } 00:14:30.127 } 00:14:30.127 ] 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "subsystem": "vmd", 00:14:30.127 "config": [] 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "subsystem": "accel", 00:14:30.127 "config": [ 00:14:30.127 { 00:14:30.127 "method": "accel_set_options", 00:14:30.127 "params": { 00:14:30.127 "small_cache_size": 128, 00:14:30.127 "large_cache_size": 16, 00:14:30.127 "task_count": 2048, 00:14:30.127 "sequence_count": 2048, 00:14:30.127 "buf_count": 2048 00:14:30.127 } 00:14:30.127 } 00:14:30.127 ] 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "subsystem": "bdev", 00:14:30.127 "config": [ 00:14:30.127 { 00:14:30.127 "method": "bdev_set_options", 00:14:30.127 "params": { 00:14:30.127 "bdev_io_pool_size": 65535, 00:14:30.127 "bdev_io_cache_size": 256, 00:14:30.127 "bdev_auto_examine": true, 00:14:30.127 "iobuf_small_cache_size": 128, 00:14:30.127 "iobuf_large_cache_size": 16 00:14:30.127 } 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "method": "bdev_raid_set_options", 00:14:30.127 "params": { 00:14:30.127 "process_window_size_kb": 1024 00:14:30.127 } 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "method": "bdev_iscsi_set_options", 00:14:30.127 "params": { 00:14:30.127 "timeout_sec": 30 00:14:30.127 } 00:14:30.127 }, 00:14:30.127 { 00:14:30.127 "method": "bdev_nvme_set_options", 00:14:30.127 "params": { 00:14:30.127 "action_on_timeout": "none", 00:14:30.127 "timeout_us": 0, 00:14:30.127 "timeout_admin_us": 0, 00:14:30.127 "keep_alive_timeout_ms": 10000, 00:14:30.127 "arbitration_burst": 0, 00:14:30.127 "low_priority_weight": 0, 00:14:30.127 "medium_priority_weight": 0, 00:14:30.127 "high_priority_weight": 0, 00:14:30.127 "nvme_adminq_poll_period_us": 10000, 00:14:30.127 "nvme_ioq_poll_period_us": 0, 00:14:30.127 "io_queue_requests": 0, 00:14:30.127 "delay_cmd_submit": 
true, 00:14:30.127 "transport_retry_count": 4, 00:14:30.127 "bdev_retry_count": 3, 00:14:30.127 "transport_ack_timeout": 0, 00:14:30.127 "ctrlr_loss_timeout_sec": 0, 00:14:30.127 "reconnect_delay_sec": 0, 00:14:30.127 "fast_io_fail_timeout_sec": 0, 00:14:30.127 "disable_auto_failback": false, 00:14:30.127 "generate_uuids": false, 00:14:30.127 "transport_tos": 0, 00:14:30.127 "nvme_error_stat": false, 00:14:30.127 "rdma_srq_size": 0, 00:14:30.127 "io_path_stat": false, 00:14:30.127 "allow_accel_sequence": false, 00:14:30.127 "rdma_max_cq_size": 0, 00:14:30.127 "rdma_cm_event_timeout_ms": 0, 00:14:30.127 "dhchap_digests": [ 00:14:30.127 "sha256", 00:14:30.127 "sha384", 00:14:30.127 "sha512" 00:14:30.127 ], 00:14:30.128 "dhchap_dhgroups": [ 00:14:30.128 "null", 00:14:30.128 "ffdhe2048", 00:14:30.128 "ffdhe3072", 00:14:30.128 "ffdhe4096", 00:14:30.128 "ffdhe6144", 00:14:30.128 "ffdhe8192" 00:14:30.128 ] 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "bdev_nvme_set_hotplug", 00:14:30.128 "params": { 00:14:30.128 "period_us": 100000, 00:14:30.128 "enable": false 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "bdev_malloc_create", 00:14:30.128 "params": { 00:14:30.128 "name": "malloc0", 00:14:30.128 "num_blocks": 8192, 00:14:30.128 "block_size": 4096, 00:14:30.128 "physical_block_size": 4096, 00:14:30.128 "uuid": "90e42b4b-82d4-464f-a292-d9334034d0e7", 00:14:30.128 "optimal_io_boundary": 0 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "bdev_wait_for_examine" 00:14:30.128 } 00:14:30.128 ] 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "subsystem": "nbd", 00:14:30.128 "config": [] 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "subsystem": "scheduler", 00:14:30.128 "config": [ 00:14:30.128 { 00:14:30.128 "method": "framework_set_scheduler", 00:14:30.128 "params": { 00:14:30.128 "name": "static" 00:14:30.128 } 00:14:30.128 } 00:14:30.128 ] 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "subsystem": "nvmf", 00:14:30.128 "config": [ 00:14:30.128 { 00:14:30.128 "method": "nvmf_set_config", 00:14:30.128 "params": { 00:14:30.128 "discovery_filter": "match_any", 00:14:30.128 "admin_cmd_passthru": { 00:14:30.128 "identify_ctrlr": false 00:14:30.128 } 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_set_max_subsystems", 00:14:30.128 "params": { 00:14:30.128 "max_subsystems": 1024 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_set_crdt", 00:14:30.128 "params": { 00:14:30.128 "crdt1": 0, 00:14:30.128 "crdt2": 0, 00:14:30.128 "crdt3": 0 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_create_transport", 00:14:30.128 "params": { 00:14:30.128 "trtype": "TCP", 00:14:30.128 "max_queue_depth": 128, 00:14:30.128 "max_io_qpairs_per_ctrlr": 127, 00:14:30.128 "in_capsule_data_size": 4096, 00:14:30.128 "max_io_size": 131072, 00:14:30.128 "io_unit_size": 131072, 00:14:30.128 "max_aq_depth": 128, 00:14:30.128 "num_shared_buffers": 511, 00:14:30.128 "buf_cache_size": 4294967295, 00:14:30.128 "dif_insert_or_strip": false, 00:14:30.128 "zcopy": false, 00:14:30.128 "c2h_success": false, 00:14:30.128 "sock_priority": 0, 00:14:30.128 "abort_timeout_sec": 1, 00:14:30.128 "ack_timeout": 0, 00:14:30.128 "data_wr_pool_size": 0 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_create_subsystem", 00:14:30.128 "params": { 00:14:30.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.128 "allow_any_host": false, 00:14:30.128 "serial_number": "SPDK00000000000001", 00:14:30.128 
"model_number": "SPDK bdev Controller", 00:14:30.128 "max_namespaces": 10, 00:14:30.128 "min_cntlid": 1, 00:14:30.128 "max_cntlid": 65519, 00:14:30.128 "ana_reporting": false 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_subsystem_add_host", 00:14:30.128 "params": { 00:14:30.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.128 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.128 "psk": "/tmp/tmp.UUWNrq9s03" 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_subsystem_add_ns", 00:14:30.128 "params": { 00:14:30.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.128 "namespace": { 00:14:30.128 "nsid": 1, 00:14:30.128 "bdev_name": "malloc0", 00:14:30.128 "nguid": "90E42B4B82D4464FA292D9334034D0E7", 00:14:30.128 "uuid": "90e42b4b-82d4-464f-a292-d9334034d0e7", 00:14:30.128 "no_auto_visible": false 00:14:30.128 } 00:14:30.128 } 00:14:30.128 }, 00:14:30.128 { 00:14:30.128 "method": "nvmf_subsystem_add_listener", 00:14:30.128 "params": { 00:14:30.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.128 "listen_address": { 00:14:30.128 "trtype": "TCP", 00:14:30.128 "adrfam": "IPv4", 00:14:30.129 "traddr": "10.0.0.2", 00:14:30.129 "trsvcid": "4420" 00:14:30.129 }, 00:14:30.129 "secure_channel": true 00:14:30.129 } 00:14:30.129 } 00:14:30.129 ] 00:14:30.129 } 00:14:30.129 ] 00:14:30.129 }' 00:14:30.129 22:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:30.388 22:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:30.388 "subsystems": [ 00:14:30.388 { 00:14:30.388 "subsystem": "keyring", 00:14:30.388 "config": [] 00:14:30.388 }, 00:14:30.388 { 00:14:30.388 "subsystem": "iobuf", 00:14:30.388 "config": [ 00:14:30.388 { 00:14:30.388 "method": "iobuf_set_options", 00:14:30.388 "params": { 00:14:30.388 "small_pool_count": 8192, 00:14:30.388 "large_pool_count": 1024, 00:14:30.388 "small_bufsize": 8192, 00:14:30.388 "large_bufsize": 135168 00:14:30.388 } 00:14:30.388 } 00:14:30.388 ] 00:14:30.388 }, 00:14:30.388 { 00:14:30.388 "subsystem": "sock", 00:14:30.388 "config": [ 00:14:30.388 { 00:14:30.388 "method": "sock_set_default_impl", 00:14:30.388 "params": { 00:14:30.388 "impl_name": "uring" 00:14:30.388 } 00:14:30.388 }, 00:14:30.388 { 00:14:30.388 "method": "sock_impl_set_options", 00:14:30.388 "params": { 00:14:30.388 "impl_name": "ssl", 00:14:30.388 "recv_buf_size": 4096, 00:14:30.388 "send_buf_size": 4096, 00:14:30.388 "enable_recv_pipe": true, 00:14:30.388 "enable_quickack": false, 00:14:30.388 "enable_placement_id": 0, 00:14:30.388 "enable_zerocopy_send_server": true, 00:14:30.388 "enable_zerocopy_send_client": false, 00:14:30.388 "zerocopy_threshold": 0, 00:14:30.388 "tls_version": 0, 00:14:30.388 "enable_ktls": false 00:14:30.388 } 00:14:30.388 }, 00:14:30.388 { 00:14:30.388 "method": "sock_impl_set_options", 00:14:30.388 "params": { 00:14:30.388 "impl_name": "posix", 00:14:30.389 "recv_buf_size": 2097152, 00:14:30.389 "send_buf_size": 2097152, 00:14:30.389 "enable_recv_pipe": true, 00:14:30.389 "enable_quickack": false, 00:14:30.389 "enable_placement_id": 0, 00:14:30.389 "enable_zerocopy_send_server": true, 00:14:30.389 "enable_zerocopy_send_client": false, 00:14:30.389 "zerocopy_threshold": 0, 00:14:30.389 "tls_version": 0, 00:14:30.389 "enable_ktls": false 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "sock_impl_set_options", 00:14:30.389 "params": { 00:14:30.389 "impl_name": "uring", 00:14:30.389 "recv_buf_size": 
2097152, 00:14:30.389 "send_buf_size": 2097152, 00:14:30.389 "enable_recv_pipe": true, 00:14:30.389 "enable_quickack": false, 00:14:30.389 "enable_placement_id": 0, 00:14:30.389 "enable_zerocopy_send_server": false, 00:14:30.389 "enable_zerocopy_send_client": false, 00:14:30.389 "zerocopy_threshold": 0, 00:14:30.389 "tls_version": 0, 00:14:30.389 "enable_ktls": false 00:14:30.389 } 00:14:30.389 } 00:14:30.389 ] 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "subsystem": "vmd", 00:14:30.389 "config": [] 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "subsystem": "accel", 00:14:30.389 "config": [ 00:14:30.389 { 00:14:30.389 "method": "accel_set_options", 00:14:30.389 "params": { 00:14:30.389 "small_cache_size": 128, 00:14:30.389 "large_cache_size": 16, 00:14:30.389 "task_count": 2048, 00:14:30.389 "sequence_count": 2048, 00:14:30.389 "buf_count": 2048 00:14:30.389 } 00:14:30.389 } 00:14:30.389 ] 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "subsystem": "bdev", 00:14:30.389 "config": [ 00:14:30.389 { 00:14:30.389 "method": "bdev_set_options", 00:14:30.389 "params": { 00:14:30.389 "bdev_io_pool_size": 65535, 00:14:30.389 "bdev_io_cache_size": 256, 00:14:30.389 "bdev_auto_examine": true, 00:14:30.389 "iobuf_small_cache_size": 128, 00:14:30.389 "iobuf_large_cache_size": 16 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "bdev_raid_set_options", 00:14:30.389 "params": { 00:14:30.389 "process_window_size_kb": 1024 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "bdev_iscsi_set_options", 00:14:30.389 "params": { 00:14:30.389 "timeout_sec": 30 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "bdev_nvme_set_options", 00:14:30.389 "params": { 00:14:30.389 "action_on_timeout": "none", 00:14:30.389 "timeout_us": 0, 00:14:30.389 "timeout_admin_us": 0, 00:14:30.389 "keep_alive_timeout_ms": 10000, 00:14:30.389 "arbitration_burst": 0, 00:14:30.389 "low_priority_weight": 0, 00:14:30.389 "medium_priority_weight": 0, 00:14:30.389 "high_priority_weight": 0, 00:14:30.389 "nvme_adminq_poll_period_us": 10000, 00:14:30.389 "nvme_ioq_poll_period_us": 0, 00:14:30.389 "io_queue_requests": 512, 00:14:30.389 "delay_cmd_submit": true, 00:14:30.389 "transport_retry_count": 4, 00:14:30.389 "bdev_retry_count": 3, 00:14:30.389 "transport_ack_timeout": 0, 00:14:30.389 "ctrlr_loss_timeout_sec": 0, 00:14:30.389 "reconnect_delay_sec": 0, 00:14:30.389 "fast_io_fail_timeout_sec": 0, 00:14:30.389 "disable_auto_failback": false, 00:14:30.389 "generate_uuids": false, 00:14:30.389 "transport_tos": 0, 00:14:30.389 "nvme_error_stat": false, 00:14:30.389 "rdma_srq_size": 0, 00:14:30.389 "io_path_stat": false, 00:14:30.389 "allow_accel_sequence": false, 00:14:30.389 "rdma_max_cq_size": 0, 00:14:30.389 "rdma_cm_event_timeout_ms": 0, 00:14:30.389 "dhchap_digests": [ 00:14:30.389 "sha256", 00:14:30.389 "sha384", 00:14:30.389 "sha512" 00:14:30.389 ], 00:14:30.389 "dhchap_dhgroups": [ 00:14:30.389 "null", 00:14:30.389 "ffdhe2048", 00:14:30.389 "ffdhe3072", 00:14:30.389 "ffdhe4096", 00:14:30.389 "ffdhe6144", 00:14:30.389 "ffdhe8192" 00:14:30.389 ] 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "bdev_nvme_attach_controller", 00:14:30.389 "params": { 00:14:30.389 "name": "TLSTEST", 00:14:30.389 "trtype": "TCP", 00:14:30.389 "adrfam": "IPv4", 00:14:30.389 "traddr": "10.0.0.2", 00:14:30.389 "trsvcid": "4420", 00:14:30.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.389 "prchk_reftag": false, 00:14:30.389 "prchk_guard": false, 00:14:30.389 "ctrlr_loss_timeout_sec": 0, 
00:14:30.389 "reconnect_delay_sec": 0, 00:14:30.389 "fast_io_fail_timeout_sec": 0, 00:14:30.389 "psk": "/tmp/tmp.UUWNrq9s03", 00:14:30.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:30.389 "hdgst": false, 00:14:30.389 "ddgst": false 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "bdev_nvme_set_hotplug", 00:14:30.389 "params": { 00:14:30.389 "period_us": 100000, 00:14:30.389 "enable": false 00:14:30.389 } 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "method": "bdev_wait_for_examine" 00:14:30.389 } 00:14:30.389 ] 00:14:30.389 }, 00:14:30.389 { 00:14:30.389 "subsystem": "nbd", 00:14:30.389 "config": [] 00:14:30.389 } 00:14:30.389 ] 00:14:30.389 }' 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73765 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73765 ']' 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73765 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73765 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:30.389 killing process with pid 73765 00:14:30.389 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.389 00:14:30.389 Latency(us) 00:14:30.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.389 =================================================================================================================== 00:14:30.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73765' 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73765 00:14:30.389 [2024-07-15 22:44:45.752871] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:30.389 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73765 00:14:30.648 22:44:45 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73709 00:14:30.648 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73709 ']' 00:14:30.648 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73709 00:14:30.648 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:30.648 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.648 22:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73709 00:14:30.648 killing process with pid 73709 00:14:30.648 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.648 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.648 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73709' 00:14:30.648 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73709 00:14:30.648 [2024-07-15 22:44:46.008997] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 
times 00:14:30.648 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73709 00:14:30.907 22:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:30.907 22:44:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.907 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.907 22:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:30.907 "subsystems": [ 00:14:30.907 { 00:14:30.907 "subsystem": "keyring", 00:14:30.907 "config": [] 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "subsystem": "iobuf", 00:14:30.907 "config": [ 00:14:30.907 { 00:14:30.907 "method": "iobuf_set_options", 00:14:30.907 "params": { 00:14:30.907 "small_pool_count": 8192, 00:14:30.907 "large_pool_count": 1024, 00:14:30.907 "small_bufsize": 8192, 00:14:30.907 "large_bufsize": 135168 00:14:30.907 } 00:14:30.907 } 00:14:30.907 ] 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "subsystem": "sock", 00:14:30.907 "config": [ 00:14:30.907 { 00:14:30.907 "method": "sock_set_default_impl", 00:14:30.907 "params": { 00:14:30.907 "impl_name": "uring" 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "sock_impl_set_options", 00:14:30.907 "params": { 00:14:30.907 "impl_name": "ssl", 00:14:30.907 "recv_buf_size": 4096, 00:14:30.907 "send_buf_size": 4096, 00:14:30.907 "enable_recv_pipe": true, 00:14:30.907 "enable_quickack": false, 00:14:30.907 "enable_placement_id": 0, 00:14:30.907 "enable_zerocopy_send_server": true, 00:14:30.907 "enable_zerocopy_send_client": false, 00:14:30.907 "zerocopy_threshold": 0, 00:14:30.907 "tls_version": 0, 00:14:30.907 "enable_ktls": false 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "sock_impl_set_options", 00:14:30.907 "params": { 00:14:30.907 "impl_name": "posix", 00:14:30.907 "recv_buf_size": 2097152, 00:14:30.907 "send_buf_size": 2097152, 00:14:30.907 "enable_recv_pipe": true, 00:14:30.907 "enable_quickack": false, 00:14:30.907 "enable_placement_id": 0, 00:14:30.907 "enable_zerocopy_send_server": true, 00:14:30.907 "enable_zerocopy_send_client": false, 00:14:30.907 "zerocopy_threshold": 0, 00:14:30.907 "tls_version": 0, 00:14:30.907 "enable_ktls": false 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "sock_impl_set_options", 00:14:30.907 "params": { 00:14:30.907 "impl_name": "uring", 00:14:30.907 "recv_buf_size": 2097152, 00:14:30.907 "send_buf_size": 2097152, 00:14:30.907 "enable_recv_pipe": true, 00:14:30.907 "enable_quickack": false, 00:14:30.907 "enable_placement_id": 0, 00:14:30.907 "enable_zerocopy_send_server": false, 00:14:30.907 "enable_zerocopy_send_client": false, 00:14:30.907 "zerocopy_threshold": 0, 00:14:30.907 "tls_version": 0, 00:14:30.907 "enable_ktls": false 00:14:30.907 } 00:14:30.907 } 00:14:30.907 ] 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "subsystem": "vmd", 00:14:30.907 "config": [] 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "subsystem": "accel", 00:14:30.907 "config": [ 00:14:30.907 { 00:14:30.907 "method": "accel_set_options", 00:14:30.907 "params": { 00:14:30.907 "small_cache_size": 128, 00:14:30.907 "large_cache_size": 16, 00:14:30.907 "task_count": 2048, 00:14:30.907 "sequence_count": 2048, 00:14:30.907 "buf_count": 2048 00:14:30.907 } 00:14:30.907 } 00:14:30.907 ] 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "subsystem": "bdev", 00:14:30.907 "config": [ 00:14:30.907 { 00:14:30.907 "method": "bdev_set_options", 00:14:30.907 "params": { 00:14:30.907 "bdev_io_pool_size": 65535, 
00:14:30.907 "bdev_io_cache_size": 256, 00:14:30.907 "bdev_auto_examine": true, 00:14:30.907 "iobuf_small_cache_size": 128, 00:14:30.907 "iobuf_large_cache_size": 16 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "bdev_raid_set_options", 00:14:30.907 "params": { 00:14:30.907 "process_window_size_kb": 1024 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "bdev_iscsi_set_options", 00:14:30.907 "params": { 00:14:30.907 "timeout_sec": 30 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "bdev_nvme_set_options", 00:14:30.907 "params": { 00:14:30.907 "action_on_timeout": "none", 00:14:30.907 "timeout_us": 0, 00:14:30.907 "timeout_admin_us": 0, 00:14:30.907 "keep_alive_timeout_ms": 10000, 00:14:30.907 "arbitration_burst": 0, 00:14:30.907 "low_priority_weight": 0, 00:14:30.907 "medium_priority_weight": 0, 00:14:30.907 "high_priority_weight": 0, 00:14:30.907 "nvme_adminq_poll_period_us": 10000, 00:14:30.907 "nvme_ioq_poll_period_us": 0, 00:14:30.907 "io_queue_requests": 0, 00:14:30.907 "delay_cmd_submit": true, 00:14:30.907 "transport_retry_count": 4, 00:14:30.907 "bdev_retry_count": 3, 00:14:30.907 "transport_ack_timeout": 0, 00:14:30.907 "ctrlr_loss_timeout_sec": 0, 00:14:30.907 "reconnect_delay_sec": 0, 00:14:30.907 "fast_io_fail_timeout_sec": 0, 00:14:30.907 "disable_auto_failback": false, 00:14:30.907 "generate_uuids": false, 00:14:30.907 "transport_tos": 0, 00:14:30.907 "nvme_error_stat": false, 00:14:30.907 "rdma_srq_size": 0, 00:14:30.907 "io_path_stat": false, 00:14:30.907 "allow_accel_sequence": false, 00:14:30.907 "rdma_max_cq_size": 0, 00:14:30.907 "rdma_cm_event_timeout_ms": 0, 00:14:30.907 "dhchap_digests": [ 00:14:30.907 "sha256", 00:14:30.907 "sha384", 00:14:30.907 "sha512" 00:14:30.907 ], 00:14:30.907 "dhchap_dhgroups": [ 00:14:30.907 "null", 00:14:30.907 "ffdhe2048", 00:14:30.907 "ffdhe3072", 00:14:30.907 "ffdhe4096", 00:14:30.907 "ffdhe6144", 00:14:30.907 "ffdhe8192" 00:14:30.907 ] 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "bdev_nvme_set_hotplug", 00:14:30.907 "params": { 00:14:30.907 "period_us": 100000, 00:14:30.907 "enable": false 00:14:30.907 } 00:14:30.907 }, 00:14:30.907 { 00:14:30.907 "method": "bdev_malloc_create", 00:14:30.907 "params": { 00:14:30.908 "name": "malloc0", 00:14:30.908 "num_blocks": 8192, 00:14:30.908 "block_size": 4096, 00:14:30.908 "physical_block_size": 4096, 00:14:30.908 "uuid": "90e42b4b-82d4-464f-a292-d9334034d0e7", 00:14:30.908 "optimal_io_boundary": 0 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "bdev_wait_for_examine" 00:14:30.908 } 00:14:30.908 ] 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "subsystem": "nbd", 00:14:30.908 "config": [] 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "subsystem": "scheduler", 00:14:30.908 "config": [ 00:14:30.908 { 00:14:30.908 "method": "framework_set_scheduler", 00:14:30.908 "params": { 00:14:30.908 "name": "static" 00:14:30.908 } 00:14:30.908 } 00:14:30.908 ] 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "subsystem": "nvmf", 00:14:30.908 "config": [ 00:14:30.908 { 00:14:30.908 "method": "nvmf_set_config", 00:14:30.908 "params": { 00:14:30.908 "discovery_filter": "match_any", 00:14:30.908 "admin_cmd_passthru": { 00:14:30.908 "identify_ctrlr": false 00:14:30.908 } 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_set_max_subsystems", 00:14:30.908 "params": { 00:14:30.908 "max_subsystems": 1024 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_set_crdt", 
00:14:30.908 "params": { 00:14:30.908 "crdt1": 0, 00:14:30.908 "crdt2": 0, 00:14:30.908 "crdt3": 0 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_create_transport", 00:14:30.908 "params": { 00:14:30.908 "trtype": "TCP", 00:14:30.908 "max_queue_depth": 128, 00:14:30.908 "max_io_qpairs_per_ctrlr": 127, 00:14:30.908 "in_capsule_data_size": 4096, 00:14:30.908 "max_io_size": 131072, 00:14:30.908 "io_unit_size": 131072, 00:14:30.908 "max_aq_depth": 128, 00:14:30.908 "num_shared_buffers": 511, 00:14:30.908 "buf_cache_size": 4294967295, 00:14:30.908 "dif_insert_or_strip": false, 00:14:30.908 "zcopy": false, 00:14:30.908 "c2h_success": false, 00:14:30.908 "sock_priority": 0, 00:14:30.908 "abort_timeout_sec": 1, 00:14:30.908 "ack_timeout": 0, 00:14:30.908 "data_wr_pool_size": 0 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_create_subsystem", 00:14:30.908 "params": { 00:14:30.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.908 "allow_any_host": false, 00:14:30.908 "serial_number": "SPDK00000000000001", 00:14:30.908 "model_number": "SPDK bdev Controller", 00:14:30.908 "max_namespaces": 10, 00:14:30.908 "min_cntlid": 1, 00:14:30.908 "max_cntlid": 65519, 00:14:30.908 "ana_reporting": false 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_subsystem_add_host", 00:14:30.908 "params": { 00:14:30.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.908 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.908 "psk": "/tmp/tmp.UUWNrq9s03" 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_subsystem_add_ns", 00:14:30.908 "params": { 00:14:30.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.908 "namespace": { 00:14:30.908 "nsid": 1, 00:14:30.908 "bdev_name": "malloc0", 00:14:30.908 "nguid": "90E42B4B82D4464FA292D9334034D0E7", 00:14:30.908 "uuid": "90e42b4b-82d4-464f-a292-d9334034d0e7", 00:14:30.908 "no_auto_visible": false 00:14:30.908 } 00:14:30.908 } 00:14:30.908 }, 00:14:30.908 { 00:14:30.908 "method": "nvmf_subsystem_add_listener", 00:14:30.908 "params": { 00:14:30.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.908 "listen_address": { 00:14:30.908 "trtype": "TCP", 00:14:30.908 "adrfam": "IPv4", 00:14:30.908 "traddr": "10.0.0.2", 00:14:30.908 "trsvcid": "4420" 00:14:30.908 }, 00:14:30.908 "secure_channel": true 00:14:30.908 } 00:14:30.908 } 00:14:30.908 ] 00:14:30.908 } 00:14:30.908 ] 00:14:30.908 }' 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73808 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73808 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73808 ']' 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.908 22:44:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.908 [2024-07-15 22:44:46.306345] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:30.908 [2024-07-15 22:44:46.306443] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.908 [2024-07-15 22:44:46.442209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.166 [2024-07-15 22:44:46.563153] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.166 [2024-07-15 22:44:46.563226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.166 [2024-07-15 22:44:46.563238] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.166 [2024-07-15 22:44:46.563257] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.166 [2024-07-15 22:44:46.563264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.166 [2024-07-15 22:44:46.563352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.166 [2024-07-15 22:44:46.731116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:31.424 [2024-07-15 22:44:46.802215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.424 [2024-07-15 22:44:46.818122] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:31.424 [2024-07-15 22:44:46.834145] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.424 [2024-07-15 22:44:46.834361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73840 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73840 /var/tmp/bdevperf.sock 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73840 ']' 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
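What the trace above shows is target/tls.sh restarting nvmf_tgt non-interactively: the JSON captured earlier with save_config is echoed back into the new target process as its startup configuration, which is what the "-c /dev/fd/62" argument corresponds to. A minimal sketch of that round-trip, with illustrative variable names and the test's nvmfappstart helper replaced by a direct invocation (the use of bash process substitution here is an assumption about how the file descriptor gets populated):

# capture the live configuration of a running target
tgtconf=$(scripts/rpc.py save_config)
# relaunch the target, feeding the saved JSON back in through a file descriptor;
# "-c /dev/fd/62" in the log above is this same idea via the test's helper
build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &

Because the saved config still carries the PSK as a filesystem path, the relaunched target immediately prints the "nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09" warning seen above.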
00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:31.992 22:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:31.992 "subsystems": [ 00:14:31.992 { 00:14:31.992 "subsystem": "keyring", 00:14:31.992 "config": [] 00:14:31.992 }, 00:14:31.992 { 00:14:31.992 "subsystem": "iobuf", 00:14:31.992 "config": [ 00:14:31.992 { 00:14:31.992 "method": "iobuf_set_options", 00:14:31.992 "params": { 00:14:31.992 "small_pool_count": 8192, 00:14:31.992 "large_pool_count": 1024, 00:14:31.992 "small_bufsize": 8192, 00:14:31.992 "large_bufsize": 135168 00:14:31.992 } 00:14:31.992 } 00:14:31.992 ] 00:14:31.992 }, 00:14:31.992 { 00:14:31.992 "subsystem": "sock", 00:14:31.992 "config": [ 00:14:31.992 { 00:14:31.992 "method": "sock_set_default_impl", 00:14:31.992 "params": { 00:14:31.992 "impl_name": "uring" 00:14:31.992 } 00:14:31.992 }, 00:14:31.992 { 00:14:31.992 "method": "sock_impl_set_options", 00:14:31.992 "params": { 00:14:31.992 "impl_name": "ssl", 00:14:31.992 "recv_buf_size": 4096, 00:14:31.992 "send_buf_size": 4096, 00:14:31.992 "enable_recv_pipe": true, 00:14:31.992 "enable_quickack": false, 00:14:31.992 "enable_placement_id": 0, 00:14:31.992 "enable_zerocopy_send_server": true, 00:14:31.992 "enable_zerocopy_send_client": false, 00:14:31.992 "zerocopy_threshold": 0, 00:14:31.992 "tls_version": 0, 00:14:31.992 "enable_ktls": false 00:14:31.992 } 00:14:31.992 }, 00:14:31.992 { 00:14:31.992 "method": "sock_impl_set_options", 00:14:31.992 "params": { 00:14:31.993 "impl_name": "posix", 00:14:31.993 "recv_buf_size": 2097152, 00:14:31.993 "send_buf_size": 2097152, 00:14:31.993 "enable_recv_pipe": true, 00:14:31.993 "enable_quickack": false, 00:14:31.993 "enable_placement_id": 0, 00:14:31.993 "enable_zerocopy_send_server": true, 00:14:31.993 "enable_zerocopy_send_client": false, 00:14:31.993 "zerocopy_threshold": 0, 00:14:31.993 "tls_version": 0, 00:14:31.993 "enable_ktls": false 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "sock_impl_set_options", 00:14:31.993 "params": { 00:14:31.993 "impl_name": "uring", 00:14:31.993 "recv_buf_size": 2097152, 00:14:31.993 "send_buf_size": 2097152, 00:14:31.993 "enable_recv_pipe": true, 00:14:31.993 "enable_quickack": false, 00:14:31.993 "enable_placement_id": 0, 00:14:31.993 "enable_zerocopy_send_server": false, 00:14:31.993 "enable_zerocopy_send_client": false, 00:14:31.993 "zerocopy_threshold": 0, 00:14:31.993 "tls_version": 0, 00:14:31.993 "enable_ktls": false 00:14:31.993 } 00:14:31.993 } 00:14:31.993 ] 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "subsystem": "vmd", 00:14:31.993 "config": [] 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "subsystem": "accel", 00:14:31.993 "config": [ 00:14:31.993 { 00:14:31.993 "method": "accel_set_options", 00:14:31.993 "params": { 00:14:31.993 "small_cache_size": 128, 00:14:31.993 "large_cache_size": 16, 00:14:31.993 "task_count": 2048, 00:14:31.993 "sequence_count": 2048, 00:14:31.993 "buf_count": 2048 00:14:31.993 } 00:14:31.993 } 00:14:31.993 ] 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "subsystem": "bdev", 00:14:31.993 "config": [ 00:14:31.993 { 00:14:31.993 "method": "bdev_set_options", 00:14:31.993 "params": { 00:14:31.993 "bdev_io_pool_size": 65535, 00:14:31.993 
"bdev_io_cache_size": 256, 00:14:31.993 "bdev_auto_examine": true, 00:14:31.993 "iobuf_small_cache_size": 128, 00:14:31.993 "iobuf_large_cache_size": 16 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "bdev_raid_set_options", 00:14:31.993 "params": { 00:14:31.993 "process_window_size_kb": 1024 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "bdev_iscsi_set_options", 00:14:31.993 "params": { 00:14:31.993 "timeout_sec": 30 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "bdev_nvme_set_options", 00:14:31.993 "params": { 00:14:31.993 "action_on_timeout": "none", 00:14:31.993 "timeout_us": 0, 00:14:31.993 "timeout_admin_us": 0, 00:14:31.993 "keep_alive_timeout_ms": 10000, 00:14:31.993 "arbitration_burst": 0, 00:14:31.993 "low_priority_weight": 0, 00:14:31.993 "medium_priority_weight": 0, 00:14:31.993 "high_priority_weight": 0, 00:14:31.993 "nvme_adminq_poll_period_us": 10000, 00:14:31.993 "nvme_ioq_poll_period_us": 0, 00:14:31.993 "io_queue_requests": 512, 00:14:31.993 "delay_cmd_submit": true, 00:14:31.993 "transport_retry_count": 4, 00:14:31.993 "bdev_retry_count": 3, 00:14:31.993 "transport_ack_timeout": 0, 00:14:31.993 "ctrlr_loss_timeout_sec": 0, 00:14:31.993 "reconnect_delay_sec": 0, 00:14:31.993 "fast_io_fail_timeout_sec": 0, 00:14:31.993 "disable_auto_failback": false, 00:14:31.993 "generate_uuids": false, 00:14:31.993 "transport_tos": 0, 00:14:31.993 "nvme_error_stat": false, 00:14:31.993 "rdma_srq_size": 0, 00:14:31.993 "io_path_stat": false, 00:14:31.993 "allow_accel_sequence": false, 00:14:31.993 "rdma_max_cq_size": 0, 00:14:31.993 "rdma_cm_event_timeout_ms": 0, 00:14:31.993 "dhchap_digests": [ 00:14:31.993 "sha256", 00:14:31.993 "sha384", 00:14:31.993 "sha512" 00:14:31.993 ], 00:14:31.993 "dhchap_dhgroups": [ 00:14:31.993 "null", 00:14:31.993 "ffdhe2048", 00:14:31.993 "ffdhe3072", 00:14:31.993 "ffdhe4096", 00:14:31.993 "ffdhe6144", 00:14:31.993 "ffdhe8192" 00:14:31.993 ] 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "bdev_nvme_attach_controller", 00:14:31.993 "params": { 00:14:31.993 "name": "TLSTEST", 00:14:31.993 "trtype": "TCP", 00:14:31.993 "adrfam": "IPv4", 00:14:31.993 "traddr": "10.0.0.2", 00:14:31.993 "trsvcid": "4420", 00:14:31.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.993 "prchk_reftag": false, 00:14:31.993 "prchk_guard": false, 00:14:31.993 "ctrlr_loss_timeout_sec": 0, 00:14:31.993 "reconnect_delay_sec": 0, 00:14:31.993 "fast_io_fail_timeout_sec": 0, 00:14:31.993 "psk": "/tmp/tmp.UUWNrq9s03", 00:14:31.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.993 "hdgst": false, 00:14:31.993 "ddgst": false 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "bdev_nvme_set_hotplug", 00:14:31.993 "params": { 00:14:31.993 "period_us": 100000, 00:14:31.993 "enable": false 00:14:31.993 } 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "method": "bdev_wait_for_examine" 00:14:31.993 } 00:14:31.993 ] 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "subsystem": "nbd", 00:14:31.993 "config": [] 00:14:31.993 } 00:14:31.993 ] 00:14:31.993 }' 00:14:31.993 [2024-07-15 22:44:47.340346] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:31.993 [2024-07-15 22:44:47.340439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73840 ] 00:14:31.993 [2024-07-15 22:44:47.472267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.252 [2024-07-15 22:44:47.590873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.252 [2024-07-15 22:44:47.726899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.252 [2024-07-15 22:44:47.767840] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.252 [2024-07-15 22:44:47.768252] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:32.820 22:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.820 22:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:32.820 22:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:33.078 Running I/O for 10 seconds... 00:14:43.052 00:14:43.052 Latency(us) 00:14:43.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:43.052 Verification LBA range: start 0x0 length 0x2000 00:14:43.052 TLSTESTn1 : 10.02 3782.71 14.78 0.00 0.00 33766.31 1511.80 31457.28 00:14:43.052 =================================================================================================================== 00:14:43.052 Total : 3782.71 14.78 0.00 0.00 33766.31 1511.80 31457.28 00:14:43.052 0 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73840 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73840 ']' 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73840 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73840 00:14:43.052 killing process with pid 73840 00:14:43.052 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.052 00:14:43.052 Latency(us) 00:14:43.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.052 =================================================================================================================== 00:14:43.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73840' 00:14:43.052 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73840 00:14:43.052 [2024-07-15 22:44:58.580944] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:43.053 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73840 00:14:43.310 22:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73808 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73808 ']' 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73808 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73808 00:14:43.311 killing process with pid 73808 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73808' 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73808 00:14:43.311 [2024-07-15 22:44:58.834779] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:43.311 22:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73808 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73980 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73980 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73980 ']' 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.569 22:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.827 [2024-07-15 22:44:59.146380] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:43.827 [2024-07-15 22:44:59.146478] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.827 [2024-07-15 22:44:59.279975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.086 [2024-07-15 22:44:59.404039] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:44.086 [2024-07-15 22:44:59.404102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.086 [2024-07-15 22:44:59.404115] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.086 [2024-07-15 22:44:59.404123] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.086 [2024-07-15 22:44:59.404130] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.086 [2024-07-15 22:44:59.404155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.086 [2024-07-15 22:44:59.458117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.UUWNrq9s03 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UUWNrq9s03 00:14:44.668 22:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:44.927 [2024-07-15 22:45:00.362840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.927 22:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:45.185 22:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:45.444 [2024-07-15 22:45:00.906952] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.444 [2024-07-15 22:45:00.907213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.444 22:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:45.703 malloc0 00:14:45.703 22:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.962 22:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UUWNrq9s03 00:14:46.221 [2024-07-15 22:45:01.599862] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74029 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74029 /var/tmp/bdevperf.sock 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74029 ']' 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.221 22:45:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.221 [2024-07-15 22:45:01.672124] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:46.221 [2024-07-15 22:45:01.672240] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ] 00:14:46.508 [2024-07-15 22:45:01.810370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.508 [2024-07-15 22:45:01.927541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.508 [2024-07-15 22:45:01.980876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:47.444 22:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.444 22:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:47.444 22:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UUWNrq9s03 00:14:47.444 22:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:47.703 [2024-07-15 22:45:03.223949] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.961 nvme0n1 00:14:47.961 22:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.961 Running I/O for 1 seconds... 
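The initiator side of this case uses the keyring instead of a raw path: the PSK file is first registered as a named key, and the controller is then attached by key name, which is the non-deprecated variant. Condensed from the trace above (repo paths shortened):

# register the PSK file under the name "key0" inside the bdevperf application
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UUWNrq9s03
# attach the TLS-protected controller, referring to the key by name
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Note that only the "TLS support is considered experimental" notice shows up next to this attach in the log; the spdk_nvme_ctrlr_opts.psk deprecation warning from the earlier path-based attach is absent, and the short verify run whose results follow confirms I/O flows over the secured connection.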
00:14:49.338 00:14:49.338 Latency(us) 00:14:49.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.338 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.338 Verification LBA range: start 0x0 length 0x2000 00:14:49.338 nvme0n1 : 1.02 3895.91 15.22 0.00 0.00 32484.66 7119.59 19899.11 00:14:49.338 =================================================================================================================== 00:14:49.338 Total : 3895.91 15.22 0.00 0.00 32484.66 7119.59 19899.11 00:14:49.338 0 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74029 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74029 ']' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74029 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74029 00:14:49.338 killing process with pid 74029 00:14:49.338 Received shutdown signal, test time was about 1.000000 seconds 00:14:49.338 00:14:49.338 Latency(us) 00:14:49.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.338 =================================================================================================================== 00:14:49.338 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74029' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74029 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74029 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73980 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73980 ']' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73980 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73980 00:14:49.338 killing process with pid 73980 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:49.338 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73980' 00:14:49.339 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73980 00:14:49.339 [2024-07-15 22:45:04.760555] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:49.339 22:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73980 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74080 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74080 00:14:49.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74080 ']' 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.597 22:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.597 [2024-07-15 22:45:05.055790] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:49.597 [2024-07-15 22:45:05.056091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.856 [2024-07-15 22:45:05.188332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.856 [2024-07-15 22:45:05.306106] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.856 [2024-07-15 22:45:05.306356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.856 [2024-07-15 22:45:05.306526] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.856 [2024-07-15 22:45:05.306586] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.856 [2024-07-15 22:45:05.306597] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.856 [2024-07-15 22:45:05.306625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.856 [2024-07-15 22:45:05.359876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.798 [2024-07-15 22:45:06.163120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.798 malloc0 00:14:50.798 [2024-07-15 22:45:06.194408] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:50.798 [2024-07-15 22:45:06.194649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74118 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74118 /var/tmp/bdevperf.sock 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74118 ']' 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.798 22:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.798 [2024-07-15 22:45:06.277494] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
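In this final case the target itself also appears to be configured through the keyring: the rpc_cmd calls are issued with xtrace disabled, so the individual RPCs are not visible verbatim, but the save_config dump taken afterwards (the tgtcfg blob further down) shows the resulting state, namely a keyring_file_add_key entry for "key0" pointing at /tmp/tmp.UUWNrq9s03 and an nvmf_subsystem_add_host entry whose "psk" is the key name rather than a path. Reduced to those two calls, the saved target config reads (copied from the dump below, everything else trimmed):

{ "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.UUWNrq9s03" } }
{ "method": "nvmf_subsystem_add_host", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } }

The save_config dumps that follow capture this keyring-based state on both the target and the bdevperf sides, in contrast to the path-based form exercised at the start of this excerpt.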
00:14:50.798 [2024-07-15 22:45:06.277871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74118 ] 00:14:51.057 [2024-07-15 22:45:06.418515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.058 [2024-07-15 22:45:06.550209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.058 [2024-07-15 22:45:06.607627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:51.991 22:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.991 22:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:51.991 22:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UUWNrq9s03 00:14:51.991 22:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:52.249 [2024-07-15 22:45:07.661740] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.249 nvme0n1 00:14:52.249 22:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:52.507 Running I/O for 1 seconds... 00:14:53.442 00:14:53.442 Latency(us) 00:14:53.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.442 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:53.442 Verification LBA range: start 0x0 length 0x2000 00:14:53.442 nvme0n1 : 1.03 3791.73 14.81 0.00 0.00 33163.32 9175.04 27167.65 00:14:53.442 =================================================================================================================== 00:14:53.442 Total : 3791.73 14.81 0.00 0.00 33163.32 9175.04 27167.65 00:14:53.442 0 00:14:53.442 22:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:53.442 22:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.442 22:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.701 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.701 22:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:53.701 "subsystems": [ 00:14:53.701 { 00:14:53.701 "subsystem": "keyring", 00:14:53.701 "config": [ 00:14:53.701 { 00:14:53.701 "method": "keyring_file_add_key", 00:14:53.701 "params": { 00:14:53.701 "name": "key0", 00:14:53.701 "path": "/tmp/tmp.UUWNrq9s03" 00:14:53.701 } 00:14:53.701 } 00:14:53.701 ] 00:14:53.701 }, 00:14:53.701 { 00:14:53.701 "subsystem": "iobuf", 00:14:53.701 "config": [ 00:14:53.701 { 00:14:53.702 "method": "iobuf_set_options", 00:14:53.702 "params": { 00:14:53.702 "small_pool_count": 8192, 00:14:53.702 "large_pool_count": 1024, 00:14:53.702 "small_bufsize": 8192, 00:14:53.702 "large_bufsize": 135168 00:14:53.702 } 00:14:53.702 } 00:14:53.702 ] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "sock", 00:14:53.702 "config": [ 00:14:53.702 { 00:14:53.702 "method": "sock_set_default_impl", 00:14:53.702 "params": { 00:14:53.702 "impl_name": "uring" 00:14:53.702 } 00:14:53.702 
}, 00:14:53.702 { 00:14:53.702 "method": "sock_impl_set_options", 00:14:53.702 "params": { 00:14:53.702 "impl_name": "ssl", 00:14:53.702 "recv_buf_size": 4096, 00:14:53.702 "send_buf_size": 4096, 00:14:53.702 "enable_recv_pipe": true, 00:14:53.702 "enable_quickack": false, 00:14:53.702 "enable_placement_id": 0, 00:14:53.702 "enable_zerocopy_send_server": true, 00:14:53.702 "enable_zerocopy_send_client": false, 00:14:53.702 "zerocopy_threshold": 0, 00:14:53.702 "tls_version": 0, 00:14:53.702 "enable_ktls": false 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "sock_impl_set_options", 00:14:53.702 "params": { 00:14:53.702 "impl_name": "posix", 00:14:53.702 "recv_buf_size": 2097152, 00:14:53.702 "send_buf_size": 2097152, 00:14:53.702 "enable_recv_pipe": true, 00:14:53.702 "enable_quickack": false, 00:14:53.702 "enable_placement_id": 0, 00:14:53.702 "enable_zerocopy_send_server": true, 00:14:53.702 "enable_zerocopy_send_client": false, 00:14:53.702 "zerocopy_threshold": 0, 00:14:53.702 "tls_version": 0, 00:14:53.702 "enable_ktls": false 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "sock_impl_set_options", 00:14:53.702 "params": { 00:14:53.702 "impl_name": "uring", 00:14:53.702 "recv_buf_size": 2097152, 00:14:53.702 "send_buf_size": 2097152, 00:14:53.702 "enable_recv_pipe": true, 00:14:53.702 "enable_quickack": false, 00:14:53.702 "enable_placement_id": 0, 00:14:53.702 "enable_zerocopy_send_server": false, 00:14:53.702 "enable_zerocopy_send_client": false, 00:14:53.702 "zerocopy_threshold": 0, 00:14:53.702 "tls_version": 0, 00:14:53.702 "enable_ktls": false 00:14:53.702 } 00:14:53.702 } 00:14:53.702 ] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "vmd", 00:14:53.702 "config": [] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "accel", 00:14:53.702 "config": [ 00:14:53.702 { 00:14:53.702 "method": "accel_set_options", 00:14:53.702 "params": { 00:14:53.702 "small_cache_size": 128, 00:14:53.702 "large_cache_size": 16, 00:14:53.702 "task_count": 2048, 00:14:53.702 "sequence_count": 2048, 00:14:53.702 "buf_count": 2048 00:14:53.702 } 00:14:53.702 } 00:14:53.702 ] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "bdev", 00:14:53.702 "config": [ 00:14:53.702 { 00:14:53.702 "method": "bdev_set_options", 00:14:53.702 "params": { 00:14:53.702 "bdev_io_pool_size": 65535, 00:14:53.702 "bdev_io_cache_size": 256, 00:14:53.702 "bdev_auto_examine": true, 00:14:53.702 "iobuf_small_cache_size": 128, 00:14:53.702 "iobuf_large_cache_size": 16 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "bdev_raid_set_options", 00:14:53.702 "params": { 00:14:53.702 "process_window_size_kb": 1024 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "bdev_iscsi_set_options", 00:14:53.702 "params": { 00:14:53.702 "timeout_sec": 30 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "bdev_nvme_set_options", 00:14:53.702 "params": { 00:14:53.702 "action_on_timeout": "none", 00:14:53.702 "timeout_us": 0, 00:14:53.702 "timeout_admin_us": 0, 00:14:53.702 "keep_alive_timeout_ms": 10000, 00:14:53.702 "arbitration_burst": 0, 00:14:53.702 "low_priority_weight": 0, 00:14:53.702 "medium_priority_weight": 0, 00:14:53.702 "high_priority_weight": 0, 00:14:53.702 "nvme_adminq_poll_period_us": 10000, 00:14:53.702 "nvme_ioq_poll_period_us": 0, 00:14:53.702 "io_queue_requests": 0, 00:14:53.702 "delay_cmd_submit": true, 00:14:53.702 "transport_retry_count": 4, 00:14:53.702 "bdev_retry_count": 3, 00:14:53.702 
"transport_ack_timeout": 0, 00:14:53.702 "ctrlr_loss_timeout_sec": 0, 00:14:53.702 "reconnect_delay_sec": 0, 00:14:53.702 "fast_io_fail_timeout_sec": 0, 00:14:53.702 "disable_auto_failback": false, 00:14:53.702 "generate_uuids": false, 00:14:53.702 "transport_tos": 0, 00:14:53.702 "nvme_error_stat": false, 00:14:53.702 "rdma_srq_size": 0, 00:14:53.702 "io_path_stat": false, 00:14:53.702 "allow_accel_sequence": false, 00:14:53.702 "rdma_max_cq_size": 0, 00:14:53.702 "rdma_cm_event_timeout_ms": 0, 00:14:53.702 "dhchap_digests": [ 00:14:53.702 "sha256", 00:14:53.702 "sha384", 00:14:53.702 "sha512" 00:14:53.702 ], 00:14:53.702 "dhchap_dhgroups": [ 00:14:53.702 "null", 00:14:53.702 "ffdhe2048", 00:14:53.702 "ffdhe3072", 00:14:53.702 "ffdhe4096", 00:14:53.702 "ffdhe6144", 00:14:53.702 "ffdhe8192" 00:14:53.702 ] 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "bdev_nvme_set_hotplug", 00:14:53.702 "params": { 00:14:53.702 "period_us": 100000, 00:14:53.702 "enable": false 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "bdev_malloc_create", 00:14:53.702 "params": { 00:14:53.702 "name": "malloc0", 00:14:53.702 "num_blocks": 8192, 00:14:53.702 "block_size": 4096, 00:14:53.702 "physical_block_size": 4096, 00:14:53.702 "uuid": "a245bfc3-4460-4211-924d-cfdbce9f98c9", 00:14:53.702 "optimal_io_boundary": 0 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "bdev_wait_for_examine" 00:14:53.702 } 00:14:53.702 ] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "nbd", 00:14:53.702 "config": [] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "scheduler", 00:14:53.702 "config": [ 00:14:53.702 { 00:14:53.702 "method": "framework_set_scheduler", 00:14:53.702 "params": { 00:14:53.702 "name": "static" 00:14:53.702 } 00:14:53.702 } 00:14:53.702 ] 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "subsystem": "nvmf", 00:14:53.702 "config": [ 00:14:53.702 { 00:14:53.702 "method": "nvmf_set_config", 00:14:53.702 "params": { 00:14:53.702 "discovery_filter": "match_any", 00:14:53.702 "admin_cmd_passthru": { 00:14:53.702 "identify_ctrlr": false 00:14:53.702 } 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_set_max_subsystems", 00:14:53.702 "params": { 00:14:53.702 "max_subsystems": 1024 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_set_crdt", 00:14:53.702 "params": { 00:14:53.702 "crdt1": 0, 00:14:53.702 "crdt2": 0, 00:14:53.702 "crdt3": 0 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_create_transport", 00:14:53.702 "params": { 00:14:53.702 "trtype": "TCP", 00:14:53.702 "max_queue_depth": 128, 00:14:53.702 "max_io_qpairs_per_ctrlr": 127, 00:14:53.702 "in_capsule_data_size": 4096, 00:14:53.702 "max_io_size": 131072, 00:14:53.702 "io_unit_size": 131072, 00:14:53.702 "max_aq_depth": 128, 00:14:53.702 "num_shared_buffers": 511, 00:14:53.702 "buf_cache_size": 4294967295, 00:14:53.702 "dif_insert_or_strip": false, 00:14:53.702 "zcopy": false, 00:14:53.702 "c2h_success": false, 00:14:53.702 "sock_priority": 0, 00:14:53.702 "abort_timeout_sec": 1, 00:14:53.702 "ack_timeout": 0, 00:14:53.702 "data_wr_pool_size": 0 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_create_subsystem", 00:14:53.702 "params": { 00:14:53.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.702 "allow_any_host": false, 00:14:53.702 "serial_number": "00000000000000000000", 00:14:53.702 "model_number": "SPDK bdev Controller", 00:14:53.702 "max_namespaces": 32, 00:14:53.702 
"min_cntlid": 1, 00:14:53.702 "max_cntlid": 65519, 00:14:53.702 "ana_reporting": false 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_subsystem_add_host", 00:14:53.702 "params": { 00:14:53.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.702 "host": "nqn.2016-06.io.spdk:host1", 00:14:53.702 "psk": "key0" 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_subsystem_add_ns", 00:14:53.702 "params": { 00:14:53.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.702 "namespace": { 00:14:53.702 "nsid": 1, 00:14:53.702 "bdev_name": "malloc0", 00:14:53.702 "nguid": "A245BFC344604211924DCFDBCE9F98C9", 00:14:53.702 "uuid": "a245bfc3-4460-4211-924d-cfdbce9f98c9", 00:14:53.702 "no_auto_visible": false 00:14:53.702 } 00:14:53.702 } 00:14:53.702 }, 00:14:53.702 { 00:14:53.702 "method": "nvmf_subsystem_add_listener", 00:14:53.702 "params": { 00:14:53.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.702 "listen_address": { 00:14:53.703 "trtype": "TCP", 00:14:53.703 "adrfam": "IPv4", 00:14:53.703 "traddr": "10.0.0.2", 00:14:53.703 "trsvcid": "4420" 00:14:53.703 }, 00:14:53.703 "secure_channel": true 00:14:53.703 } 00:14:53.703 } 00:14:53.703 ] 00:14:53.703 } 00:14:53.703 ] 00:14:53.703 }' 00:14:53.703 22:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:53.985 22:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:53.985 "subsystems": [ 00:14:53.985 { 00:14:53.985 "subsystem": "keyring", 00:14:53.985 "config": [ 00:14:53.985 { 00:14:53.986 "method": "keyring_file_add_key", 00:14:53.986 "params": { 00:14:53.986 "name": "key0", 00:14:53.986 "path": "/tmp/tmp.UUWNrq9s03" 00:14:53.986 } 00:14:53.986 } 00:14:53.986 ] 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "subsystem": "iobuf", 00:14:53.986 "config": [ 00:14:53.986 { 00:14:53.986 "method": "iobuf_set_options", 00:14:53.986 "params": { 00:14:53.986 "small_pool_count": 8192, 00:14:53.986 "large_pool_count": 1024, 00:14:53.986 "small_bufsize": 8192, 00:14:53.986 "large_bufsize": 135168 00:14:53.986 } 00:14:53.986 } 00:14:53.986 ] 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "subsystem": "sock", 00:14:53.986 "config": [ 00:14:53.986 { 00:14:53.986 "method": "sock_set_default_impl", 00:14:53.986 "params": { 00:14:53.986 "impl_name": "uring" 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "sock_impl_set_options", 00:14:53.986 "params": { 00:14:53.986 "impl_name": "ssl", 00:14:53.986 "recv_buf_size": 4096, 00:14:53.986 "send_buf_size": 4096, 00:14:53.986 "enable_recv_pipe": true, 00:14:53.986 "enable_quickack": false, 00:14:53.986 "enable_placement_id": 0, 00:14:53.986 "enable_zerocopy_send_server": true, 00:14:53.986 "enable_zerocopy_send_client": false, 00:14:53.986 "zerocopy_threshold": 0, 00:14:53.986 "tls_version": 0, 00:14:53.986 "enable_ktls": false 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "sock_impl_set_options", 00:14:53.986 "params": { 00:14:53.986 "impl_name": "posix", 00:14:53.986 "recv_buf_size": 2097152, 00:14:53.986 "send_buf_size": 2097152, 00:14:53.986 "enable_recv_pipe": true, 00:14:53.986 "enable_quickack": false, 00:14:53.986 "enable_placement_id": 0, 00:14:53.986 "enable_zerocopy_send_server": true, 00:14:53.986 "enable_zerocopy_send_client": false, 00:14:53.986 "zerocopy_threshold": 0, 00:14:53.986 "tls_version": 0, 00:14:53.986 "enable_ktls": false 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "sock_impl_set_options", 
00:14:53.986 "params": { 00:14:53.986 "impl_name": "uring", 00:14:53.986 "recv_buf_size": 2097152, 00:14:53.986 "send_buf_size": 2097152, 00:14:53.986 "enable_recv_pipe": true, 00:14:53.986 "enable_quickack": false, 00:14:53.986 "enable_placement_id": 0, 00:14:53.986 "enable_zerocopy_send_server": false, 00:14:53.986 "enable_zerocopy_send_client": false, 00:14:53.986 "zerocopy_threshold": 0, 00:14:53.986 "tls_version": 0, 00:14:53.986 "enable_ktls": false 00:14:53.986 } 00:14:53.986 } 00:14:53.986 ] 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "subsystem": "vmd", 00:14:53.986 "config": [] 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "subsystem": "accel", 00:14:53.986 "config": [ 00:14:53.986 { 00:14:53.986 "method": "accel_set_options", 00:14:53.986 "params": { 00:14:53.986 "small_cache_size": 128, 00:14:53.986 "large_cache_size": 16, 00:14:53.986 "task_count": 2048, 00:14:53.986 "sequence_count": 2048, 00:14:53.986 "buf_count": 2048 00:14:53.986 } 00:14:53.986 } 00:14:53.986 ] 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "subsystem": "bdev", 00:14:53.986 "config": [ 00:14:53.986 { 00:14:53.986 "method": "bdev_set_options", 00:14:53.986 "params": { 00:14:53.986 "bdev_io_pool_size": 65535, 00:14:53.986 "bdev_io_cache_size": 256, 00:14:53.986 "bdev_auto_examine": true, 00:14:53.986 "iobuf_small_cache_size": 128, 00:14:53.986 "iobuf_large_cache_size": 16 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "bdev_raid_set_options", 00:14:53.986 "params": { 00:14:53.986 "process_window_size_kb": 1024 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "bdev_iscsi_set_options", 00:14:53.986 "params": { 00:14:53.986 "timeout_sec": 30 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "bdev_nvme_set_options", 00:14:53.986 "params": { 00:14:53.986 "action_on_timeout": "none", 00:14:53.986 "timeout_us": 0, 00:14:53.986 "timeout_admin_us": 0, 00:14:53.986 "keep_alive_timeout_ms": 10000, 00:14:53.986 "arbitration_burst": 0, 00:14:53.986 "low_priority_weight": 0, 00:14:53.986 "medium_priority_weight": 0, 00:14:53.986 "high_priority_weight": 0, 00:14:53.986 "nvme_adminq_poll_period_us": 10000, 00:14:53.986 "nvme_ioq_poll_period_us": 0, 00:14:53.986 "io_queue_requests": 512, 00:14:53.986 "delay_cmd_submit": true, 00:14:53.986 "transport_retry_count": 4, 00:14:53.986 "bdev_retry_count": 3, 00:14:53.986 "transport_ack_timeout": 0, 00:14:53.986 "ctrlr_loss_timeout_sec": 0, 00:14:53.986 "reconnect_delay_sec": 0, 00:14:53.986 "fast_io_fail_timeout_sec": 0, 00:14:53.986 "disable_auto_failback": false, 00:14:53.986 "generate_uuids": false, 00:14:53.986 "transport_tos": 0, 00:14:53.986 "nvme_error_stat": false, 00:14:53.986 "rdma_srq_size": 0, 00:14:53.986 "io_path_stat": false, 00:14:53.986 "allow_accel_sequence": false, 00:14:53.986 "rdma_max_cq_size": 0, 00:14:53.986 "rdma_cm_event_timeout_ms": 0, 00:14:53.986 "dhchap_digests": [ 00:14:53.986 "sha256", 00:14:53.986 "sha384", 00:14:53.986 "sha512" 00:14:53.986 ], 00:14:53.986 "dhchap_dhgroups": [ 00:14:53.986 "null", 00:14:53.986 "ffdhe2048", 00:14:53.986 "ffdhe3072", 00:14:53.986 "ffdhe4096", 00:14:53.986 "ffdhe6144", 00:14:53.986 "ffdhe8192" 00:14:53.986 ] 00:14:53.986 } 00:14:53.986 }, 00:14:53.986 { 00:14:53.986 "method": "bdev_nvme_attach_controller", 00:14:53.986 "params": { 00:14:53.986 "name": "nvme0", 00:14:53.986 "trtype": "TCP", 00:14:53.986 "adrfam": "IPv4", 00:14:53.986 "traddr": "10.0.0.2", 00:14:53.986 "trsvcid": "4420", 00:14:53.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.986 
"prchk_reftag": false, 00:14:53.986 "prchk_guard": false, 00:14:53.986 "ctrlr_loss_timeout_sec": 0, 00:14:53.986 "reconnect_delay_sec": 0, 00:14:53.986 "fast_io_fail_timeout_sec": 0, 00:14:53.986 "psk": "key0", 00:14:53.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.986 "hdgst": false, 00:14:53.986 "ddgst": false 00:14:53.986 } 00:14:53.987 }, 00:14:53.987 { 00:14:53.987 "method": "bdev_nvme_set_hotplug", 00:14:53.987 "params": { 00:14:53.987 "period_us": 100000, 00:14:53.987 "enable": false 00:14:53.987 } 00:14:53.987 }, 00:14:53.987 { 00:14:53.987 "method": "bdev_enable_histogram", 00:14:53.987 "params": { 00:14:53.987 "name": "nvme0n1", 00:14:53.987 "enable": true 00:14:53.987 } 00:14:53.987 }, 00:14:53.987 { 00:14:53.987 "method": "bdev_wait_for_examine" 00:14:53.987 } 00:14:53.987 ] 00:14:53.987 }, 00:14:53.987 { 00:14:53.987 "subsystem": "nbd", 00:14:53.987 "config": [] 00:14:53.987 } 00:14:53.987 ] 00:14:53.987 }' 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74118 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74118 ']' 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74118 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74118 00:14:53.987 killing process with pid 74118 00:14:53.987 Received shutdown signal, test time was about 1.000000 seconds 00:14:53.987 00:14:53.987 Latency(us) 00:14:53.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.987 =================================================================================================================== 00:14:53.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74118' 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74118 00:14:53.987 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74118 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74080 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74080 ']' 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74080 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74080 00:14:54.245 killing process with pid 74080 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74080' 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74080 00:14:54.245 22:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74080 00:14:54.504 
22:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:54.504 22:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.504 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.504 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.504 22:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:54.504 "subsystems": [ 00:14:54.504 { 00:14:54.504 "subsystem": "keyring", 00:14:54.504 "config": [ 00:14:54.504 { 00:14:54.504 "method": "keyring_file_add_key", 00:14:54.504 "params": { 00:14:54.504 "name": "key0", 00:14:54.504 "path": "/tmp/tmp.UUWNrq9s03" 00:14:54.504 } 00:14:54.504 } 00:14:54.504 ] 00:14:54.504 }, 00:14:54.504 { 00:14:54.504 "subsystem": "iobuf", 00:14:54.504 "config": [ 00:14:54.504 { 00:14:54.504 "method": "iobuf_set_options", 00:14:54.504 "params": { 00:14:54.504 "small_pool_count": 8192, 00:14:54.504 "large_pool_count": 1024, 00:14:54.504 "small_bufsize": 8192, 00:14:54.504 "large_bufsize": 135168 00:14:54.504 } 00:14:54.504 } 00:14:54.504 ] 00:14:54.504 }, 00:14:54.504 { 00:14:54.504 "subsystem": "sock", 00:14:54.504 "config": [ 00:14:54.504 { 00:14:54.504 "method": "sock_set_default_impl", 00:14:54.504 "params": { 00:14:54.504 "impl_name": "uring" 00:14:54.504 } 00:14:54.504 }, 00:14:54.504 { 00:14:54.504 "method": "sock_impl_set_options", 00:14:54.504 "params": { 00:14:54.504 "impl_name": "ssl", 00:14:54.504 "recv_buf_size": 4096, 00:14:54.504 "send_buf_size": 4096, 00:14:54.504 "enable_recv_pipe": true, 00:14:54.504 "enable_quickack": false, 00:14:54.504 "enable_placement_id": 0, 00:14:54.504 "enable_zerocopy_send_server": true, 00:14:54.504 "enable_zerocopy_send_client": false, 00:14:54.504 "zerocopy_threshold": 0, 00:14:54.504 "tls_version": 0, 00:14:54.504 "enable_ktls": false 00:14:54.504 } 00:14:54.504 }, 00:14:54.504 { 00:14:54.504 "method": "sock_impl_set_options", 00:14:54.504 "params": { 00:14:54.504 "impl_name": "posix", 00:14:54.504 "recv_buf_size": 2097152, 00:14:54.504 "send_buf_size": 2097152, 00:14:54.504 "enable_recv_pipe": true, 00:14:54.504 "enable_quickack": false, 00:14:54.504 "enable_placement_id": 0, 00:14:54.504 "enable_zerocopy_send_server": true, 00:14:54.504 "enable_zerocopy_send_client": false, 00:14:54.504 "zerocopy_threshold": 0, 00:14:54.504 "tls_version": 0, 00:14:54.504 "enable_ktls": false 00:14:54.504 } 00:14:54.504 }, 00:14:54.504 { 00:14:54.504 "method": "sock_impl_set_options", 00:14:54.504 "params": { 00:14:54.504 "impl_name": "uring", 00:14:54.504 "recv_buf_size": 2097152, 00:14:54.504 "send_buf_size": 2097152, 00:14:54.504 "enable_recv_pipe": true, 00:14:54.505 "enable_quickack": false, 00:14:54.505 "enable_placement_id": 0, 00:14:54.505 "enable_zerocopy_send_server": false, 00:14:54.505 "enable_zerocopy_send_client": false, 00:14:54.505 "zerocopy_threshold": 0, 00:14:54.505 "tls_version": 0, 00:14:54.505 "enable_ktls": false 00:14:54.505 } 00:14:54.505 } 00:14:54.505 ] 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "subsystem": "vmd", 00:14:54.505 "config": [] 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "subsystem": "accel", 00:14:54.505 "config": [ 00:14:54.505 { 00:14:54.505 "method": "accel_set_options", 00:14:54.505 "params": { 00:14:54.505 "small_cache_size": 128, 00:14:54.505 "large_cache_size": 16, 00:14:54.505 "task_count": 2048, 00:14:54.505 "sequence_count": 2048, 00:14:54.505 "buf_count": 2048 00:14:54.505 } 00:14:54.505 } 00:14:54.505 ] 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 
"subsystem": "bdev", 00:14:54.505 "config": [ 00:14:54.505 { 00:14:54.505 "method": "bdev_set_options", 00:14:54.505 "params": { 00:14:54.505 "bdev_io_pool_size": 65535, 00:14:54.505 "bdev_io_cache_size": 256, 00:14:54.505 "bdev_auto_examine": true, 00:14:54.505 "iobuf_small_cache_size": 128, 00:14:54.505 "iobuf_large_cache_size": 16 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "bdev_raid_set_options", 00:14:54.505 "params": { 00:14:54.505 "process_window_size_kb": 1024 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "bdev_iscsi_set_options", 00:14:54.505 "params": { 00:14:54.505 "timeout_sec": 30 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "bdev_nvme_set_options", 00:14:54.505 "params": { 00:14:54.505 "action_on_timeout": "none", 00:14:54.505 "timeout_us": 0, 00:14:54.505 "timeout_admin_us": 0, 00:14:54.505 "keep_alive_timeout_ms": 10000, 00:14:54.505 "arbitration_burst": 0, 00:14:54.505 "low_priority_weight": 0, 00:14:54.505 "medium_priority_weight": 0, 00:14:54.505 "high_priority_weight": 0, 00:14:54.505 "nvme_adminq_poll_period_us": 10000, 00:14:54.505 "nvme_ioq_poll_period_us": 0, 00:14:54.505 "io_queue_requests": 0, 00:14:54.505 "delay_cmd_submit": true, 00:14:54.505 "transport_retry_count": 4, 00:14:54.505 "bdev_retry_count": 3, 00:14:54.505 "transport_ack_timeout": 0, 00:14:54.505 "ctrlr_loss_timeout_sec": 0, 00:14:54.505 "reconnect_delay_sec": 0, 00:14:54.505 "fast_io_fail_timeout_sec": 0, 00:14:54.505 "disable_auto_failback": false, 00:14:54.505 "generate_uuids": false, 00:14:54.505 "transport_tos": 0, 00:14:54.505 "nvme_error_stat": false, 00:14:54.505 "rdma_srq_size": 0, 00:14:54.505 "io_path_stat": false, 00:14:54.505 "allow_accel_sequence": false, 00:14:54.505 "rdma_max_cq_size": 0, 00:14:54.505 "rdma_cm_event_timeout_ms": 0, 00:14:54.505 "dhchap_digests": [ 00:14:54.505 "sha256", 00:14:54.505 "sha384", 00:14:54.505 "sha512" 00:14:54.505 ], 00:14:54.505 "dhchap_dhgroups": [ 00:14:54.505 "null", 00:14:54.505 "ffdhe2048", 00:14:54.505 "ffdhe3072", 00:14:54.505 "ffdhe4096", 00:14:54.505 "ffdhe6144", 00:14:54.505 "ffdhe8192" 00:14:54.505 ] 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "bdev_nvme_set_hotplug", 00:14:54.505 "params": { 00:14:54.505 "period_us": 100000, 00:14:54.505 "enable": false 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "bdev_malloc_create", 00:14:54.505 "params": { 00:14:54.505 "name": "malloc0", 00:14:54.505 "num_blocks": 8192, 00:14:54.505 "block_size": 4096, 00:14:54.505 "physical_block_size": 4096, 00:14:54.505 "uuid": "a245bfc3-4460-4211-924d-cfdbce9f98c9", 00:14:54.505 "optimal_io_boundary": 0 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "bdev_wait_for_examine" 00:14:54.505 } 00:14:54.505 ] 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "subsystem": "nbd", 00:14:54.505 "config": [] 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "subsystem": "scheduler", 00:14:54.505 "config": [ 00:14:54.505 { 00:14:54.505 "method": "framework_set_scheduler", 00:14:54.505 "params": { 00:14:54.505 "name": "static" 00:14:54.505 } 00:14:54.505 } 00:14:54.505 ] 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "subsystem": "nvmf", 00:14:54.505 "config": [ 00:14:54.505 { 00:14:54.505 "method": "nvmf_set_config", 00:14:54.505 "params": { 00:14:54.505 "discovery_filter": "match_any", 00:14:54.505 "admin_cmd_passthru": { 00:14:54.505 "identify_ctrlr": false 00:14:54.505 } 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": 
"nvmf_set_max_subsystems", 00:14:54.505 "params": { 00:14:54.505 "max_subsystems": 1024 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "nvmf_set_crdt", 00:14:54.505 "params": { 00:14:54.505 "crdt1": 0, 00:14:54.505 "crdt2": 0, 00:14:54.505 "crdt3": 0 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "nvmf_create_transport", 00:14:54.505 "params": { 00:14:54.505 "trtype": "TCP", 00:14:54.505 "max_queue_depth": 128, 00:14:54.505 "max_io_qpairs_per_ctrlr": 127, 00:14:54.505 "in_capsule_data_size": 4096, 00:14:54.505 "max_io_size": 131072, 00:14:54.505 "io_unit_size": 131072, 00:14:54.505 "max_aq_depth": 128, 00:14:54.505 "num_shared_buffers": 511, 00:14:54.505 "buf_cache_size": 4294967295, 00:14:54.505 "dif_insert_or_strip": false, 00:14:54.505 "zcopy": false, 00:14:54.505 "c2h_success": false, 00:14:54.505 "sock_priority": 0, 00:14:54.505 "abort_timeout_sec": 1, 00:14:54.505 "ack_timeout": 0, 00:14:54.505 "data_wr_pool_size": 0 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "nvmf_create_subsystem", 00:14:54.505 "params": { 00:14:54.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.505 "allow_any_host": false, 00:14:54.505 "serial_number": "00000000000000000000", 00:14:54.505 "model_number": "SPDK bdev Controller", 00:14:54.505 "max_namespaces": 32, 00:14:54.505 "min_cntlid": 1, 00:14:54.505 "max_cntlid": 65519, 00:14:54.505 "ana_reporting": false 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "nvmf_subsystem_add_host", 00:14:54.505 "params": { 00:14:54.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.505 "host": "nqn.2016-06.io.spdk:host1", 00:14:54.505 "psk": "key0" 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "nvmf_subsystem_add_ns", 00:14:54.505 "params": { 00:14:54.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.505 "namespace": { 00:14:54.505 "nsid": 1, 00:14:54.505 "bdev_name": "malloc0", 00:14:54.505 "nguid": "A245BFC344604211924DCFDBCE9F98C9", 00:14:54.505 "uuid": "a245bfc3-4460-4211-924d-cfdbce9f98c9", 00:14:54.505 "no_auto_visible": false 00:14:54.505 } 00:14:54.505 } 00:14:54.505 }, 00:14:54.505 { 00:14:54.505 "method": "nvmf_subsystem_add_listener", 00:14:54.505 "params": { 00:14:54.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.505 "listen_address": { 00:14:54.505 "trtype": "TCP", 00:14:54.505 "adrfam": "IPv4", 00:14:54.505 "traddr": "10.0.0.2", 00:14:54.505 "trsvcid": "4420" 00:14:54.505 }, 00:14:54.505 "secure_channel": true 00:14:54.505 } 00:14:54.505 } 00:14:54.505 ] 00:14:54.505 } 00:14:54.505 ] 00:14:54.505 }' 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74173 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74173 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74173 ']' 00:14:54.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.505 22:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.764 [2024-07-15 22:45:10.161943] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:14:54.764 [2024-07-15 22:45:10.162452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.764 [2024-07-15 22:45:10.309906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.022 [2024-07-15 22:45:10.461323] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.022 [2024-07-15 22:45:10.461402] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.022 [2024-07-15 22:45:10.461431] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.022 [2024-07-15 22:45:10.461440] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.022 [2024-07-15 22:45:10.461449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.022 [2024-07-15 22:45:10.461556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.281 [2024-07-15 22:45:10.653049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.281 [2024-07-15 22:45:10.749334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.281 [2024-07-15 22:45:10.781294] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:55.281 [2024-07-15 22:45:10.781685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.539 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.539 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:55.539 22:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.539 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.539 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74205 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74205 /var/tmp/bdevperf.sock 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74205 ']' 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:55.798 22:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:55.798 "subsystems": [ 00:14:55.798 { 00:14:55.798 "subsystem": "keyring", 00:14:55.798 "config": [ 00:14:55.798 { 00:14:55.798 "method": "keyring_file_add_key", 00:14:55.798 "params": { 00:14:55.798 "name": "key0", 00:14:55.798 "path": "/tmp/tmp.UUWNrq9s03" 00:14:55.798 } 00:14:55.798 } 00:14:55.798 ] 00:14:55.798 }, 00:14:55.798 { 
00:14:55.798 "subsystem": "iobuf", 00:14:55.798 "config": [ 00:14:55.798 { 00:14:55.798 "method": "iobuf_set_options", 00:14:55.798 "params": { 00:14:55.798 "small_pool_count": 8192, 00:14:55.798 "large_pool_count": 1024, 00:14:55.798 "small_bufsize": 8192, 00:14:55.798 "large_bufsize": 135168 00:14:55.798 } 00:14:55.798 } 00:14:55.798 ] 00:14:55.798 }, 00:14:55.798 { 00:14:55.798 "subsystem": "sock", 00:14:55.798 "config": [ 00:14:55.798 { 00:14:55.798 "method": "sock_set_default_impl", 00:14:55.798 "params": { 00:14:55.799 "impl_name": "uring" 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "sock_impl_set_options", 00:14:55.799 "params": { 00:14:55.799 "impl_name": "ssl", 00:14:55.799 "recv_buf_size": 4096, 00:14:55.799 "send_buf_size": 4096, 00:14:55.799 "enable_recv_pipe": true, 00:14:55.799 "enable_quickack": false, 00:14:55.799 "enable_placement_id": 0, 00:14:55.799 "enable_zerocopy_send_server": true, 00:14:55.799 "enable_zerocopy_send_client": false, 00:14:55.799 "zerocopy_threshold": 0, 00:14:55.799 "tls_version": 0, 00:14:55.799 "enable_ktls": false 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "sock_impl_set_options", 00:14:55.799 "params": { 00:14:55.799 "impl_name": "posix", 00:14:55.799 "recv_buf_size": 2097152, 00:14:55.799 "send_buf_size": 2097152, 00:14:55.799 "enable_recv_pipe": true, 00:14:55.799 "enable_quickack": false, 00:14:55.799 "enable_placement_id": 0, 00:14:55.799 "enable_zerocopy_send_server": true, 00:14:55.799 "enable_zerocopy_send_client": false, 00:14:55.799 "zerocopy_threshold": 0, 00:14:55.799 "tls_version": 0, 00:14:55.799 "enable_ktls": false 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "sock_impl_set_options", 00:14:55.799 "params": { 00:14:55.799 "impl_name": "uring", 00:14:55.799 "recv_buf_size": 2097152, 00:14:55.799 "send_buf_size": 2097152, 00:14:55.799 "enable_recv_pipe": true, 00:14:55.799 "enable_quickack": false, 00:14:55.799 "enable_placement_id": 0, 00:14:55.799 "enable_zerocopy_send_server": false, 00:14:55.799 "enable_zerocopy_send_client": false, 00:14:55.799 "zerocopy_threshold": 0, 00:14:55.799 "tls_version": 0, 00:14:55.799 "enable_ktls": false 00:14:55.799 } 00:14:55.799 } 00:14:55.799 ] 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "subsystem": "vmd", 00:14:55.799 "config": [] 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "subsystem": "accel", 00:14:55.799 "config": [ 00:14:55.799 { 00:14:55.799 "method": "accel_set_options", 00:14:55.799 "params": { 00:14:55.799 "small_cache_size": 128, 00:14:55.799 "large_cache_size": 16, 00:14:55.799 "task_count": 2048, 00:14:55.799 "sequence_count": 2048, 00:14:55.799 "buf_count": 2048 00:14:55.799 } 00:14:55.799 } 00:14:55.799 ] 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "subsystem": "bdev", 00:14:55.799 "config": [ 00:14:55.799 { 00:14:55.799 "method": "bdev_set_options", 00:14:55.799 "params": { 00:14:55.799 "bdev_io_pool_size": 65535, 00:14:55.799 "bdev_io_cache_size": 256, 00:14:55.799 "bdev_auto_examine": true, 00:14:55.799 "iobuf_small_cache_size": 128, 00:14:55.799 "iobuf_large_cache_size": 16 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_raid_set_options", 00:14:55.799 "params": { 00:14:55.799 "process_window_size_kb": 1024 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_iscsi_set_options", 00:14:55.799 "params": { 00:14:55.799 "timeout_sec": 30 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_nvme_set_options", 00:14:55.799 "params": { 
00:14:55.799 "action_on_timeout": "none", 00:14:55.799 "timeout_us": 0, 00:14:55.799 "timeout_admin_us": 0, 00:14:55.799 "keep_alive_timeout_ms": 10000, 00:14:55.799 "arbitration_burst": 0, 00:14:55.799 "low_priority_weight": 0, 00:14:55.799 "medium_priority_weight": 0, 00:14:55.799 "high_priority_weight": 0, 00:14:55.799 "nvme_adminq_poll_period_us": 10000, 00:14:55.799 "nvme_ioq_poll_period_us": 0, 00:14:55.799 "io_queue_requests": 512, 00:14:55.799 "delay_cmd_submit": true, 00:14:55.799 "transport_retry_count": 4, 00:14:55.799 "bdev_retry_count": 3, 00:14:55.799 "transport_ack_timeout": 0, 00:14:55.799 "ctrlr_loss_timeout_sec": 0, 00:14:55.799 "reconnect_delay_sec": 0, 00:14:55.799 "fast_io_fail_timeout_sec": 0, 00:14:55.799 "disable_auto_failback": false, 00:14:55.799 "generate_uuids": false, 00:14:55.799 "transport_tos": 0, 00:14:55.799 "nvme_error_stat": false, 00:14:55.799 "rdma_srq_size": 0, 00:14:55.799 "io_path_stat": false, 00:14:55.799 "allow_accel_sequence": false, 00:14:55.799 "rdma_max_cq_size": 0, 00:14:55.799 "rdma_cm_event_timeout_ms": 0, 00:14:55.799 "dhchap_digests": [ 00:14:55.799 "sha256", 00:14:55.799 "sha384", 00:14:55.799 "sha512" 00:14:55.799 ], 00:14:55.799 "dhchap_dhgroups": [ 00:14:55.799 "null", 00:14:55.799 "ffdhe2048", 00:14:55.799 "ffdhe3072", 00:14:55.799 "ffdhe4096", 00:14:55.799 "ffdhe6144", 00:14:55.799 "ffdhe8192" 00:14:55.799 ] 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_nvme_attach_controller", 00:14:55.799 "params": { 00:14:55.799 "name": "nvme0", 00:14:55.799 "trtype": "TCP", 00:14:55.799 "adrfam": "IPv4", 00:14:55.799 "traddr": "10.0.0.2", 00:14:55.799 "trsvcid": "4420", 00:14:55.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.799 "prchk_reftag": false, 00:14:55.799 "prchk_guard": false, 00:14:55.799 "ctrlr_loss_timeout_sec": 0, 00:14:55.799 "reconnect_delay_sec": 0, 00:14:55.799 "fast_io_fail_timeout_sec": 0, 00:14:55.799 "psk": "key0", 00:14:55.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.799 "hdgst": false, 00:14:55.799 "ddgst": false 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_nvme_set_hotplug", 00:14:55.799 "params": { 00:14:55.799 "period_us": 100000, 00:14:55.799 "enable": false 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_enable_histogram", 00:14:55.799 "params": { 00:14:55.799 "name": "nvme0n1", 00:14:55.799 "enable": true 00:14:55.799 } 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "method": "bdev_wait_for_examine" 00:14:55.799 } 00:14:55.799 ] 00:14:55.799 }, 00:14:55.799 { 00:14:55.799 "subsystem": "nbd", 00:14:55.799 "config": [] 00:14:55.799 } 00:14:55.799 ] 00:14:55.799 }' 00:14:55.799 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.799 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.799 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.799 22:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.799 [2024-07-15 22:45:11.195412] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:14:55.799 [2024-07-15 22:45:11.195744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74205 ] 00:14:55.799 [2024-07-15 22:45:11.335067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.058 [2024-07-15 22:45:11.462092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.058 [2024-07-15 22:45:11.597768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.316 [2024-07-15 22:45:11.645266] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.883 22:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.883 22:45:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.883 22:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:56.883 22:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:57.140 22:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.140 22:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:57.140 Running I/O for 1 seconds... 00:14:58.072 00:14:58.072 Latency(us) 00:14:58.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.072 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.072 Verification LBA range: start 0x0 length 0x2000 00:14:58.072 nvme0n1 : 1.02 3128.93 12.22 0.00 0.00 40362.33 8996.31 25022.84 00:14:58.072 =================================================================================================================== 00:14:58.072 Total : 3128.93 12.22 0.00 0.00 40362.33 8996.31 25022.84 00:14:58.072 0 00:14:58.072 22:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:58.072 22:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:58.072 22:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:58.072 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:58.072 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:58.072 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:58.329 nvmf_trace.0 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74205 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74205 ']' 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 
74205 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74205 00:14:58.329 killing process with pid 74205 00:14:58.329 Received shutdown signal, test time was about 1.000000 seconds 00:14:58.329 00:14:58.329 Latency(us) 00:14:58.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.329 =================================================================================================================== 00:14:58.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74205' 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74205 00:14:58.329 22:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74205 00:14:58.586 22:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:58.586 22:45:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.586 22:45:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.586 rmmod nvme_tcp 00:14:58.586 rmmod nvme_fabrics 00:14:58.586 rmmod nvme_keyring 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74173 ']' 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74173 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74173 ']' 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74173 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74173 00:14:58.586 killing process with pid 74173 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74173' 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74173 00:14:58.586 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74173 00:14:58.843 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.843 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GUxXBi5xlU /tmp/tmp.uas8079YLd /tmp/tmp.UUWNrq9s03 00:14:59.101 00:14:59.101 real 1m27.921s 00:14:59.101 user 2m19.543s 00:14:59.101 sys 0m28.434s 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.101 ************************************ 00:14:59.101 END TEST nvmf_tls 00:14:59.101 ************************************ 00:14:59.101 22:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.101 22:45:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.101 22:45:14 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:59.101 22:45:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.101 22:45:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.101 22:45:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.101 ************************************ 00:14:59.101 START TEST nvmf_fips 00:14:59.101 ************************************ 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:59.101 * Looking for test storage... 
00:14:59.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:59.101 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:59.358 Error setting digest 00:14:59.358 0092C35B5D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:59.358 0092C35B5D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.358 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:59.359 Cannot find device "nvmf_tgt_br" 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.359 Cannot find device "nvmf_tgt_br2" 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:59.359 Cannot find device "nvmf_tgt_br" 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:59.359 Cannot find device "nvmf_tgt_br2" 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:59.359 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.616 22:45:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:59.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:59.616 00:14:59.616 --- 10.0.0.2 ping statistics --- 00:14:59.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.616 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:59.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:59.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:14:59.616 00:14:59.616 --- 10.0.0.3 ping statistics --- 00:14:59.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.616 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:59.616 00:14:59.616 --- 10.0.0.1 ping statistics --- 00:14:59.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.616 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74482 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74482 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74482 ']' 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.616 22:45:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:59.874 [2024-07-15 22:45:15.253243] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
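Condensed, the FIPS preamble and target launch traced above amount to the following. This is a minimal sketch using the paths and names from the log; the real fips.sh builds spdk_fips.conf via build_openssl_config and wraps the negative MD5 check in its NOT() helper, so the shape below is an approximation rather than the test's exact code.

  # Verify a FIPS provider is loaded and that a non-approved digest is rejected.
  openssl list -providers | grep -i name        # expect a base provider and a FIPS provider
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 succeeded, so FIPS mode is not being enforced" >&2
      exit 1
  fi
  # Point OpenSSL at the generated FIPS config and start the target inside the test namespace.
  export OPENSSL_CONF=spdk_fips.conf
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!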
00:14:59.874 [2024-07-15 22:45:15.253358] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.874 [2024-07-15 22:45:15.392160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.130 [2024-07-15 22:45:15.515499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.130 [2024-07-15 22:45:15.515585] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.130 [2024-07-15 22:45:15.515600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.130 [2024-07-15 22:45:15.515611] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.130 [2024-07-15 22:45:15.515620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.130 [2024-07-15 22:45:15.515656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.130 [2024-07-15 22:45:15.573599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:00.693 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.963 [2024-07-15 22:45:16.504489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.963 [2024-07-15 22:45:16.520440] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.963 [2024-07-15 22:45:16.520629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.250 [2024-07-15 22:45:16.551625] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:01.250 malloc0 00:15:01.250 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:01.250 22:45:16 
nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74522 00:15:01.250 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:01.250 22:45:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74522 /var/tmp/bdevperf.sock 00:15:01.251 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74522 ']' 00:15:01.251 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.251 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.251 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.251 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.251 22:45:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:01.251 [2024-07-15 22:45:16.648336] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:15:01.251 [2024-07-15 22:45:16.648425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74522 ] 00:15:01.251 [2024-07-15 22:45:16.783121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.508 [2024-07-15 22:45:16.905749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.509 [2024-07-15 22:45:16.960071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.074 22:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.074 22:45:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:02.074 22:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:02.332 [2024-07-15 22:45:17.822472] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:02.332 [2024-07-15 22:45:17.822599] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:02.332 TLSTESTn1 00:15:02.591 22:45:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.591 Running I/O for 10 seconds... 
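The initiator side of the TLS test reduces to a handful of commands, all visible in the trace above; the sketch below reorders them linearly and omits the waitforlisten step the script performs between starting bdevperf and issuing the RPC.

  # Store the interchange-format PSK (a test key from the log, not a secret) with owner-only permissions.
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  # Start bdevperf idle (-z) on its own RPC socket, then attach to the target over TLS using the PSK.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  # Kick off the configured verify workload (queue depth 128, 4 KiB I/O, 10 seconds).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests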
00:15:12.625 00:15:12.625 Latency(us) 00:15:12.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:12.625 Verification LBA range: start 0x0 length 0x2000 00:15:12.625 TLSTESTn1 : 10.02 3960.15 15.47 0.00 0.00 32259.21 6434.44 33602.09 00:15:12.625 =================================================================================================================== 00:15:12.625 Total : 3960.15 15.47 0.00 0.00 32259.21 6434.44 33602.09 00:15:12.625 0 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:12.625 nvmf_trace.0 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74522 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74522 ']' 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74522 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74522 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:12.625 killing process with pid 74522 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74522' 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74522 00:15:12.625 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.625 00:15:12.625 Latency(us) 00:15:12.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.625 =================================================================================================================== 00:15:12.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.625 [2024-07-15 22:45:28.163303] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:12.625 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74522 00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
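The cleanup path that follows the run is mostly bookkeeping: any tracepoint shared-memory file is archived for offline analysis and the bdevperf process is terminated by pid. Approximately (output path shortened relative to the log):

  # Archive /dev/shm/nvmf_trace.0 so 'spdk_trace' can inspect it later, then stop bdevperf.
  shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')
  [ -n "$shm_file" ] && tar -C /dev/shm/ -czf "${shm_file}_shm.tar.gz" "$shm_file"
  ps --no-headers -o comm= "$bdevperf_pid"      # confirm the pid still belongs to bdevperf
  kill "$bdevperf_pid"
  wait "$bdevperf_pid"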
00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.883 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.883 rmmod nvme_tcp 00:15:12.883 rmmod nvme_fabrics 00:15:13.142 rmmod nvme_keyring 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74482 ']' 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74482 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74482 ']' 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74482 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74482 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:13.142 killing process with pid 74482 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74482' 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74482 00:15:13.142 [2024-07-15 22:45:28.503661] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:13.142 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74482 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:13.401 00:15:13.401 real 0m14.275s 00:15:13.401 user 0m19.530s 00:15:13.401 sys 0m5.703s 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.401 22:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 ************************************ 00:15:13.401 END TEST nvmf_fips 00:15:13.401 ************************************ 00:15:13.401 22:45:28 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:13.401 22:45:28 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:13.401 22:45:28 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:13.401 22:45:28 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 22:45:28 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 22:45:28 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:13.401 22:45:28 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.401 22:45:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.401 ************************************ 00:15:13.401 START TEST nvmf_identify 00:15:13.401 ************************************ 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:13.401 * Looking for test storage... 00:15:13.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.401 22:45:28 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.401 22:45:28 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.660 22:45:28 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:13.660 22:45:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:13.660 Cannot find device "nvmf_tgt_br" 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.660 Cannot find device "nvmf_tgt_br2" 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:13.660 22:45:29 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:13.660 Cannot find device "nvmf_tgt_br" 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:13.660 Cannot find device "nvmf_tgt_br2" 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:13.660 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
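nvmf_veth_init, traced again here for the identify test, builds a small topology: the initiator side stays in the root namespace on 10.0.0.1, the target side lives in nvmf_tgt_ns_spdk on 10.0.0.2 (plus 10.0.0.3 on a second interface set up the same way), and the veth peers are joined by the nvmf_br bridge. Stripped of the idempotent teardown (each delete is allowed to fail and fall through to true when the device does not exist yet), the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in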
00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:13.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:13.919 00:15:13.919 --- 10.0.0.2 ping statistics --- 00:15:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.919 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:13.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:13.919 00:15:13.919 --- 10.0.0.3 ping statistics --- 00:15:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.919 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:13.919 00:15:13.919 --- 10.0.0.1 ping statistics --- 00:15:13.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.919 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74862 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74862 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74862 ']' 00:15:13.919 22:45:29 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.919 22:45:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:13.920 [2024-07-15 22:45:29.366366] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:15:13.920 [2024-07-15 22:45:29.366464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.179 [2024-07-15 22:45:29.509404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.179 [2024-07-15 22:45:29.635922] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.179 [2024-07-15 22:45:29.636189] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.179 [2024-07-15 22:45:29.636319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.179 [2024-07-15 22:45:29.636411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.179 [2024-07-15 22:45:29.636492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
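waitforlisten itself is not shown in the trace (it lives in autotest_common.sh); conceptually it polls until the freshly started target answers on its RPC socket, along these lines (a rough equivalent, not the helper's actual code):

  # Poll /var/tmp/spdk.sock until nvmf_tgt responds, bailing out if the process dies first.
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.5
  done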
00:15:14.179 [2024-07-15 22:45:29.636741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.179 [2024-07-15 22:45:29.636805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.179 [2024-07-15 22:45:29.638419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.179 [2024-07-15 22:45:29.638469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.179 [2024-07-15 22:45:29.694813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.114 [2024-07-15 22:45:30.413098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.114 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.114 Malloc0 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 [2024-07-15 22:45:30.515126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 [ 00:15:15.115 { 00:15:15.115 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.115 "subtype": "Discovery", 00:15:15.115 "listen_addresses": [ 00:15:15.115 { 00:15:15.115 "trtype": "TCP", 00:15:15.115 "adrfam": "IPv4", 00:15:15.115 "traddr": "10.0.0.2", 00:15:15.115 "trsvcid": "4420" 00:15:15.115 } 00:15:15.115 ], 00:15:15.115 "allow_any_host": true, 00:15:15.115 "hosts": [] 00:15:15.115 }, 00:15:15.115 { 00:15:15.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.115 "subtype": "NVMe", 00:15:15.115 "listen_addresses": [ 00:15:15.115 { 00:15:15.115 "trtype": "TCP", 00:15:15.115 "adrfam": "IPv4", 00:15:15.115 "traddr": "10.0.0.2", 00:15:15.115 "trsvcid": "4420" 00:15:15.115 } 00:15:15.115 ], 00:15:15.115 "allow_any_host": true, 00:15:15.115 "hosts": [], 00:15:15.115 "serial_number": "SPDK00000000000001", 00:15:15.115 "model_number": "SPDK bdev Controller", 00:15:15.115 "max_namespaces": 32, 00:15:15.115 "min_cntlid": 1, 00:15:15.115 "max_cntlid": 65519, 00:15:15.115 "namespaces": [ 00:15:15.115 { 00:15:15.115 "nsid": 1, 00:15:15.115 "bdev_name": "Malloc0", 00:15:15.115 "name": "Malloc0", 00:15:15.115 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:15.115 "eui64": "ABCDEF0123456789", 00:15:15.115 "uuid": "cf8169e9-9aaa-414e-bed4-2d988852fb2d" 00:15:15.115 } 00:15:15.115 ] 00:15:15.115 } 00:15:15.115 ] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.115 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:15.115 [2024-07-15 22:45:30.567647] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
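The rpc_cmd calls above (rpc_cmd is a thin wrapper around scripts/rpc.py) provision everything the identify test needs: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem exposing that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. Written out as direct rpc.py invocations, followed by the spdk_nvme_identify run whose debug trace continues below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Query the discovery subsystem over the listener just created.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all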
00:15:15.115 [2024-07-15 22:45:30.567868] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74897 ] 00:15:15.380 [2024-07-15 22:45:30.705901] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:15.380 [2024-07-15 22:45:30.705988] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:15.380 [2024-07-15 22:45:30.705996] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:15.380 [2024-07-15 22:45:30.706010] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:15.380 [2024-07-15 22:45:30.706018] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:15.380 [2024-07-15 22:45:30.706183] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:15.380 [2024-07-15 22:45:30.706236] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21e8a60 0 00:15:15.380 [2024-07-15 22:45:30.710586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:15.380 [2024-07-15 22:45:30.710612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:15.380 [2024-07-15 22:45:30.710621] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:15.380 [2024-07-15 22:45:30.710628] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:15.380 [2024-07-15 22:45:30.710685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.710694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.710698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.380 [2024-07-15 22:45:30.710718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:15.380 [2024-07-15 22:45:30.710749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.380 [2024-07-15 22:45:30.718598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.380 [2024-07-15 22:45:30.718627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.380 [2024-07-15 22:45:30.718633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.380 [2024-07-15 22:45:30.718650] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:15.380 [2024-07-15 22:45:30.718660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:15.380 [2024-07-15 22:45:30.718667] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:15.380 [2024-07-15 22:45:30.718685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718696] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.380 [2024-07-15 22:45:30.718707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.380 [2024-07-15 22:45:30.718738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.380 [2024-07-15 22:45:30.718813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.380 [2024-07-15 22:45:30.718821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.380 [2024-07-15 22:45:30.718825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.380 [2024-07-15 22:45:30.718836] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:15.380 [2024-07-15 22:45:30.718844] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:15.380 [2024-07-15 22:45:30.718852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.380 [2024-07-15 22:45:30.718869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.380 [2024-07-15 22:45:30.718888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.380 [2024-07-15 22:45:30.718935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.380 [2024-07-15 22:45:30.718942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.380 [2024-07-15 22:45:30.718946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.380 [2024-07-15 22:45:30.718957] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:15.380 [2024-07-15 22:45:30.718966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:15.380 [2024-07-15 22:45:30.718974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.718983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.380 [2024-07-15 22:45:30.718990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.380 [2024-07-15 22:45:30.719008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.380 [2024-07-15 22:45:30.719051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.380 [2024-07-15 22:45:30.719058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:15:15.380 [2024-07-15 22:45:30.719062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.719066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.380 [2024-07-15 22:45:30.719072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:15.380 [2024-07-15 22:45:30.719084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.719089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.719093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.380 [2024-07-15 22:45:30.719100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.380 [2024-07-15 22:45:30.719117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.380 [2024-07-15 22:45:30.719163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.380 [2024-07-15 22:45:30.719170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.380 [2024-07-15 22:45:30.719174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.719178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.380 [2024-07-15 22:45:30.719183] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:15.380 [2024-07-15 22:45:30.719189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:15.380 [2024-07-15 22:45:30.719197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:15.380 [2024-07-15 22:45:30.719303] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:15.380 [2024-07-15 22:45:30.719318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:15.380 [2024-07-15 22:45:30.719329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.719334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.380 [2024-07-15 22:45:30.719346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.719354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.381 [2024-07-15 22:45:30.719374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.381 [2024-07-15 22:45:30.719425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.381 [2024-07-15 22:45:30.719436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.381 [2024-07-15 22:45:30.719441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719445] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.381 [2024-07-15 22:45:30.719451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:15.381 [2024-07-15 22:45:30.719462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.719479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.381 [2024-07-15 22:45:30.719497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.381 [2024-07-15 22:45:30.719540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.381 [2024-07-15 22:45:30.719547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.381 [2024-07-15 22:45:30.719551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.381 [2024-07-15 22:45:30.719573] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:15.381 [2024-07-15 22:45:30.719580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:15.381 [2024-07-15 22:45:30.719589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:15.381 [2024-07-15 22:45:30.719601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:15.381 [2024-07-15 22:45:30.719613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.719626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.381 [2024-07-15 22:45:30.719647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.381 [2024-07-15 22:45:30.719744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.381 [2024-07-15 22:45:30.719752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.381 [2024-07-15 22:45:30.719755] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719760] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e8a60): datao=0, datal=4096, cccid=0 00:15:15.381 [2024-07-15 22:45:30.719765] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x222b840) on tqpair(0x21e8a60): expected_datao=0, payload_size=4096 00:15:15.381 [2024-07-15 22:45:30.719770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719779] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719784] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.381 [2024-07-15 22:45:30.719799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.381 [2024-07-15 22:45:30.719803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.381 [2024-07-15 22:45:30.719817] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:15.381 [2024-07-15 22:45:30.719823] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:15.381 [2024-07-15 22:45:30.719828] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:15.381 [2024-07-15 22:45:30.719834] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:15.381 [2024-07-15 22:45:30.719839] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:15.381 [2024-07-15 22:45:30.719844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:15.381 [2024-07-15 22:45:30.719857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:15.381 [2024-07-15 22:45:30.719872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.719889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.381 [2024-07-15 22:45:30.719908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.381 [2024-07-15 22:45:30.719965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.381 [2024-07-15 22:45:30.719972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.381 [2024-07-15 22:45:30.719976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.381 [2024-07-15 22:45:30.719989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.719998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.720004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.381 [2024-07-15 22:45:30.720011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:15:15.381 [2024-07-15 22:45:30.720015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.720026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.381 [2024-07-15 22:45:30.720033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.720047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.381 [2024-07-15 22:45:30.720054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.720068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.381 [2024-07-15 22:45:30.720073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:15.381 [2024-07-15 22:45:30.720086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:15.381 [2024-07-15 22:45:30.720095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.720106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.381 [2024-07-15 22:45:30.720126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b840, cid 0, qid 0 00:15:15.381 [2024-07-15 22:45:30.720133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222b9c0, cid 1, qid 0 00:15:15.381 [2024-07-15 22:45:30.720138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bb40, cid 2, qid 0 00:15:15.381 [2024-07-15 22:45:30.720143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.381 [2024-07-15 22:45:30.720148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222be40, cid 4, qid 0 00:15:15.381 [2024-07-15 22:45:30.720237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.381 [2024-07-15 22:45:30.720244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.381 [2024-07-15 22:45:30.720248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222be40) on tqpair=0x21e8a60 00:15:15.381 [2024-07-15 22:45:30.720257] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:15.381 [2024-07-15 22:45:30.720263] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:15.381 [2024-07-15 22:45:30.720275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e8a60) 00:15:15.381 [2024-07-15 22:45:30.720302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.381 [2024-07-15 22:45:30.720322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222be40, cid 4, qid 0 00:15:15.381 [2024-07-15 22:45:30.720380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.381 [2024-07-15 22:45:30.720387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.381 [2024-07-15 22:45:30.720396] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720400] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e8a60): datao=0, datal=4096, cccid=4 00:15:15.381 [2024-07-15 22:45:30.720405] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x222be40) on tqpair(0x21e8a60): expected_datao=0, payload_size=4096 00:15:15.381 [2024-07-15 22:45:30.720409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720417] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720421] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.381 [2024-07-15 22:45:30.720435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.381 [2024-07-15 22:45:30.720439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.381 [2024-07-15 22:45:30.720443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222be40) on tqpair=0x21e8a60 00:15:15.381 [2024-07-15 22:45:30.720458] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:15.382 [2024-07-15 22:45:30.720492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e8a60) 00:15:15.382 [2024-07-15 22:45:30.720506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.382 [2024-07-15 22:45:30.720514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21e8a60) 00:15:15.382 [2024-07-15 22:45:30.720529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.382 [2024-07-15 22:45:30.720553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222be40, cid 4, qid 0 00:15:15.382 [2024-07-15 
22:45:30.720572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bfc0, cid 5, qid 0 00:15:15.382 [2024-07-15 22:45:30.720683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.382 [2024-07-15 22:45:30.720699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.382 [2024-07-15 22:45:30.720703] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720707] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e8a60): datao=0, datal=1024, cccid=4 00:15:15.382 [2024-07-15 22:45:30.720713] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x222be40) on tqpair(0x21e8a60): expected_datao=0, payload_size=1024 00:15:15.382 [2024-07-15 22:45:30.720717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720725] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720729] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.382 [2024-07-15 22:45:30.720741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.382 [2024-07-15 22:45:30.720745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bfc0) on tqpair=0x21e8a60 00:15:15.382 [2024-07-15 22:45:30.720768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.382 [2024-07-15 22:45:30.720776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.382 [2024-07-15 22:45:30.720780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222be40) on tqpair=0x21e8a60 00:15:15.382 [2024-07-15 22:45:30.720798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e8a60) 00:15:15.382 [2024-07-15 22:45:30.720811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.382 [2024-07-15 22:45:30.720836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222be40, cid 4, qid 0 00:15:15.382 [2024-07-15 22:45:30.720904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.382 [2024-07-15 22:45:30.720919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.382 [2024-07-15 22:45:30.720924] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720928] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e8a60): datao=0, datal=3072, cccid=4 00:15:15.382 [2024-07-15 22:45:30.720933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x222be40) on tqpair(0x21e8a60): expected_datao=0, payload_size=3072 00:15:15.382 [2024-07-15 22:45:30.720938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720949] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:15:15.382 [2024-07-15 22:45:30.720958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.382 [2024-07-15 22:45:30.720964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.382 [2024-07-15 22:45:30.720968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222be40) on tqpair=0x21e8a60 00:15:15.382 [2024-07-15 22:45:30.720983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.720988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e8a60) 00:15:15.382 [2024-07-15 22:45:30.720995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.382 [2024-07-15 22:45:30.721019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222be40, cid 4, qid 0 00:15:15.382 [2024-07-15 22:45:30.721081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.382 [2024-07-15 22:45:30.721088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.382 [2024-07-15 22:45:30.721092] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.721096] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e8a60): datao=0, datal=8, cccid=4 00:15:15.382 [2024-07-15 22:45:30.721101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x222be40) on tqpair(0x21e8a60): expected_datao=0, payload_size=8 00:15:15.382 [2024-07-15 22:45:30.721106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.721113] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.721117] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.721131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.382 [2024-07-15 22:45:30.721139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.382 [2024-07-15 22:45:30.721143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.382 [2024-07-15 22:45:30.721147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222be40) on tqpair=0x21e8a60 00:15:15.382 ===================================================== 00:15:15.382 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:15.382 ===================================================== 00:15:15.382 Controller Capabilities/Features 00:15:15.382 ================================ 00:15:15.382 Vendor ID: 0000 00:15:15.382 Subsystem Vendor ID: 0000 00:15:15.382 Serial Number: .................... 00:15:15.382 Model Number: ........................................ 
00:15:15.382 Firmware Version: 24.09 00:15:15.382 Recommended Arb Burst: 0 00:15:15.382 IEEE OUI Identifier: 00 00 00 00:15:15.382 Multi-path I/O 00:15:15.382 May have multiple subsystem ports: No 00:15:15.382 May have multiple controllers: No 00:15:15.382 Associated with SR-IOV VF: No 00:15:15.382 Max Data Transfer Size: 131072 00:15:15.382 Max Number of Namespaces: 0 00:15:15.382 Max Number of I/O Queues: 1024 00:15:15.382 NVMe Specification Version (VS): 1.3 00:15:15.382 NVMe Specification Version (Identify): 1.3 00:15:15.382 Maximum Queue Entries: 128 00:15:15.382 Contiguous Queues Required: Yes 00:15:15.382 Arbitration Mechanisms Supported 00:15:15.382 Weighted Round Robin: Not Supported 00:15:15.382 Vendor Specific: Not Supported 00:15:15.382 Reset Timeout: 15000 ms 00:15:15.382 Doorbell Stride: 4 bytes 00:15:15.382 NVM Subsystem Reset: Not Supported 00:15:15.382 Command Sets Supported 00:15:15.382 NVM Command Set: Supported 00:15:15.382 Boot Partition: Not Supported 00:15:15.382 Memory Page Size Minimum: 4096 bytes 00:15:15.382 Memory Page Size Maximum: 4096 bytes 00:15:15.382 Persistent Memory Region: Not Supported 00:15:15.382 Optional Asynchronous Events Supported 00:15:15.382 Namespace Attribute Notices: Not Supported 00:15:15.382 Firmware Activation Notices: Not Supported 00:15:15.382 ANA Change Notices: Not Supported 00:15:15.382 PLE Aggregate Log Change Notices: Not Supported 00:15:15.382 LBA Status Info Alert Notices: Not Supported 00:15:15.382 EGE Aggregate Log Change Notices: Not Supported 00:15:15.382 Normal NVM Subsystem Shutdown event: Not Supported 00:15:15.382 Zone Descriptor Change Notices: Not Supported 00:15:15.382 Discovery Log Change Notices: Supported 00:15:15.382 Controller Attributes 00:15:15.382 128-bit Host Identifier: Not Supported 00:15:15.382 Non-Operational Permissive Mode: Not Supported 00:15:15.382 NVM Sets: Not Supported 00:15:15.382 Read Recovery Levels: Not Supported 00:15:15.382 Endurance Groups: Not Supported 00:15:15.382 Predictable Latency Mode: Not Supported 00:15:15.382 Traffic Based Keep ALive: Not Supported 00:15:15.382 Namespace Granularity: Not Supported 00:15:15.382 SQ Associations: Not Supported 00:15:15.382 UUID List: Not Supported 00:15:15.382 Multi-Domain Subsystem: Not Supported 00:15:15.382 Fixed Capacity Management: Not Supported 00:15:15.382 Variable Capacity Management: Not Supported 00:15:15.382 Delete Endurance Group: Not Supported 00:15:15.382 Delete NVM Set: Not Supported 00:15:15.382 Extended LBA Formats Supported: Not Supported 00:15:15.382 Flexible Data Placement Supported: Not Supported 00:15:15.382 00:15:15.382 Controller Memory Buffer Support 00:15:15.382 ================================ 00:15:15.382 Supported: No 00:15:15.382 00:15:15.382 Persistent Memory Region Support 00:15:15.382 ================================ 00:15:15.382 Supported: No 00:15:15.382 00:15:15.382 Admin Command Set Attributes 00:15:15.382 ============================ 00:15:15.382 Security Send/Receive: Not Supported 00:15:15.382 Format NVM: Not Supported 00:15:15.382 Firmware Activate/Download: Not Supported 00:15:15.382 Namespace Management: Not Supported 00:15:15.382 Device Self-Test: Not Supported 00:15:15.382 Directives: Not Supported 00:15:15.382 NVMe-MI: Not Supported 00:15:15.382 Virtualization Management: Not Supported 00:15:15.382 Doorbell Buffer Config: Not Supported 00:15:15.382 Get LBA Status Capability: Not Supported 00:15:15.382 Command & Feature Lockdown Capability: Not Supported 00:15:15.382 Abort Command Limit: 1 00:15:15.382 Async 
Event Request Limit: 4 00:15:15.382 Number of Firmware Slots: N/A 00:15:15.382 Firmware Slot 1 Read-Only: N/A 00:15:15.382 Firmware Activation Without Reset: N/A 00:15:15.382 Multiple Update Detection Support: N/A 00:15:15.382 Firmware Update Granularity: No Information Provided 00:15:15.382 Per-Namespace SMART Log: No 00:15:15.382 Asymmetric Namespace Access Log Page: Not Supported 00:15:15.382 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:15.382 Command Effects Log Page: Not Supported 00:15:15.382 Get Log Page Extended Data: Supported 00:15:15.382 Telemetry Log Pages: Not Supported 00:15:15.382 Persistent Event Log Pages: Not Supported 00:15:15.382 Supported Log Pages Log Page: May Support 00:15:15.383 Commands Supported & Effects Log Page: Not Supported 00:15:15.383 Feature Identifiers & Effects Log Page:May Support 00:15:15.383 NVMe-MI Commands & Effects Log Page: May Support 00:15:15.383 Data Area 4 for Telemetry Log: Not Supported 00:15:15.383 Error Log Page Entries Supported: 128 00:15:15.383 Keep Alive: Not Supported 00:15:15.383 00:15:15.383 NVM Command Set Attributes 00:15:15.383 ========================== 00:15:15.383 Submission Queue Entry Size 00:15:15.383 Max: 1 00:15:15.383 Min: 1 00:15:15.383 Completion Queue Entry Size 00:15:15.383 Max: 1 00:15:15.383 Min: 1 00:15:15.383 Number of Namespaces: 0 00:15:15.383 Compare Command: Not Supported 00:15:15.383 Write Uncorrectable Command: Not Supported 00:15:15.383 Dataset Management Command: Not Supported 00:15:15.383 Write Zeroes Command: Not Supported 00:15:15.383 Set Features Save Field: Not Supported 00:15:15.383 Reservations: Not Supported 00:15:15.383 Timestamp: Not Supported 00:15:15.383 Copy: Not Supported 00:15:15.383 Volatile Write Cache: Not Present 00:15:15.383 Atomic Write Unit (Normal): 1 00:15:15.383 Atomic Write Unit (PFail): 1 00:15:15.383 Atomic Compare & Write Unit: 1 00:15:15.383 Fused Compare & Write: Supported 00:15:15.383 Scatter-Gather List 00:15:15.383 SGL Command Set: Supported 00:15:15.383 SGL Keyed: Supported 00:15:15.383 SGL Bit Bucket Descriptor: Not Supported 00:15:15.383 SGL Metadata Pointer: Not Supported 00:15:15.383 Oversized SGL: Not Supported 00:15:15.383 SGL Metadata Address: Not Supported 00:15:15.383 SGL Offset: Supported 00:15:15.383 Transport SGL Data Block: Not Supported 00:15:15.383 Replay Protected Memory Block: Not Supported 00:15:15.383 00:15:15.383 Firmware Slot Information 00:15:15.383 ========================= 00:15:15.383 Active slot: 0 00:15:15.383 00:15:15.383 00:15:15.383 Error Log 00:15:15.383 ========= 00:15:15.383 00:15:15.383 Active Namespaces 00:15:15.383 ================= 00:15:15.383 Discovery Log Page 00:15:15.383 ================== 00:15:15.383 Generation Counter: 2 00:15:15.383 Number of Records: 2 00:15:15.383 Record Format: 0 00:15:15.383 00:15:15.383 Discovery Log Entry 0 00:15:15.383 ---------------------- 00:15:15.383 Transport Type: 3 (TCP) 00:15:15.383 Address Family: 1 (IPv4) 00:15:15.383 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:15.383 Entry Flags: 00:15:15.383 Duplicate Returned Information: 1 00:15:15.383 Explicit Persistent Connection Support for Discovery: 1 00:15:15.383 Transport Requirements: 00:15:15.383 Secure Channel: Not Required 00:15:15.383 Port ID: 0 (0x0000) 00:15:15.383 Controller ID: 65535 (0xffff) 00:15:15.383 Admin Max SQ Size: 128 00:15:15.383 Transport Service Identifier: 4420 00:15:15.383 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:15.383 Transport Address: 10.0.0.2 00:15:15.383 
Discovery Log Entry 1 00:15:15.383 ---------------------- 00:15:15.383 Transport Type: 3 (TCP) 00:15:15.383 Address Family: 1 (IPv4) 00:15:15.383 Subsystem Type: 2 (NVM Subsystem) 00:15:15.383 Entry Flags: 00:15:15.383 Duplicate Returned Information: 0 00:15:15.383 Explicit Persistent Connection Support for Discovery: 0 00:15:15.383 Transport Requirements: 00:15:15.383 Secure Channel: Not Required 00:15:15.383 Port ID: 0 (0x0000) 00:15:15.383 Controller ID: 65535 (0xffff) 00:15:15.383 Admin Max SQ Size: 128 00:15:15.383 Transport Service Identifier: 4420 00:15:15.383 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:15.383 Transport Address: 10.0.0.2 [2024-07-15 22:45:30.721245] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:15.383 [2024-07-15 22:45:30.721259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b840) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.383 [2024-07-15 22:45:30.721272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222b9c0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.383 [2024-07-15 22:45:30.721282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bb40) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.383 [2024-07-15 22:45:30.721292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.383 [2024-07-15 22:45:30.721307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.383 [2024-07-15 22:45:30.721324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.383 [2024-07-15 22:45:30.721347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.383 [2024-07-15 22:45:30.721394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.383 [2024-07-15 22:45:30.721406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.383 [2024-07-15 22:45:30.721410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.383 [2024-07-15 
22:45:30.721440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.383 [2024-07-15 22:45:30.721463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.383 [2024-07-15 22:45:30.721524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.383 [2024-07-15 22:45:30.721530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.383 [2024-07-15 22:45:30.721534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721544] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:15.383 [2024-07-15 22:45:30.721549] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:15.383 [2024-07-15 22:45:30.721559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.383 [2024-07-15 22:45:30.721591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.383 [2024-07-15 22:45:30.721611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.383 [2024-07-15 22:45:30.721659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.383 [2024-07-15 22:45:30.721666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.383 [2024-07-15 22:45:30.721669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.383 [2024-07-15 22:45:30.721701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.383 [2024-07-15 22:45:30.721719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.383 [2024-07-15 22:45:30.721761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.383 [2024-07-15 22:45:30.721768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.383 [2024-07-15 22:45:30.721772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721795] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.383 [2024-07-15 22:45:30.721803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.383 [2024-07-15 22:45:30.721819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.383 [2024-07-15 22:45:30.721865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.383 [2024-07-15 22:45:30.721871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.383 [2024-07-15 22:45:30.721875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.383 [2024-07-15 22:45:30.721890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.383 [2024-07-15 22:45:30.721898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.383 [2024-07-15 22:45:30.721906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.383 [2024-07-15 22:45:30.721922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.721969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.721981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.721985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.721990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.722001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.722017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.722035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.722082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.722089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.722093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.722107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.722124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.722140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.722187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.722195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.722199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.722214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.722231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.722247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.722292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.722298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.722302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.722317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.722333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.722349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.722394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.722401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.722405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.722420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.722436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.722452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 
[2024-07-15 22:45:30.722497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.722504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.722507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.722522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.722530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.722538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.722554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.723780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.723803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.723809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.723813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.723829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.723835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.723839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e8a60) 00:15:15.384 [2024-07-15 22:45:30.723848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.384 [2024-07-15 22:45:30.723875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x222bcc0, cid 3, qid 0 00:15:15.384 [2024-07-15 22:45:30.723926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.384 [2024-07-15 22:45:30.723933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.384 [2024-07-15 22:45:30.723937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.723941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x222bcc0) on tqpair=0x21e8a60 00:15:15.384 [2024-07-15 22:45:30.723950] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 2 milliseconds 00:15:15.384 00:15:15.384 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:15.384 [2024-07-15 22:45:30.765338] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:15:15.384 [2024-07-15 22:45:30.765407] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74905 ] 00:15:15.384 [2024-07-15 22:45:30.909797] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:15.384 [2024-07-15 22:45:30.909882] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:15.384 [2024-07-15 22:45:30.909890] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:15.384 [2024-07-15 22:45:30.909904] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:15.384 [2024-07-15 22:45:30.909913] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:15.384 [2024-07-15 22:45:30.910077] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:15.384 [2024-07-15 22:45:30.910131] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb95a60 0 00:15:15.384 [2024-07-15 22:45:30.922583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:15.384 [2024-07-15 22:45:30.922615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:15.384 [2024-07-15 22:45:30.922621] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:15.384 [2024-07-15 22:45:30.922625] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:15.384 [2024-07-15 22:45:30.922677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.384 [2024-07-15 22:45:30.922685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.922690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.922707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:15.385 [2024-07-15 22:45:30.922742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.930580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.930601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.930606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.930628] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:15.385 [2024-07-15 22:45:30.930637] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:15.385 [2024-07-15 22:45:30.930644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:15.385 [2024-07-15 22:45:30.930661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930671] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.930681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.930708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.930773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.930780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.930784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.930795] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:15.385 [2024-07-15 22:45:30.930803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:15.385 [2024-07-15 22:45:30.930811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.930827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.930846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.930897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.930904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.930908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.930919] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:15.385 [2024-07-15 22:45:30.930928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:15.385 [2024-07-15 22:45:30.930935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.930944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.930951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.930968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.931016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.931023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.931026] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.931037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:15.385 [2024-07-15 22:45:30.931048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.931064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.931081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.931126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.931132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.931136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.931146] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:15.385 [2024-07-15 22:45:30.931152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:15.385 [2024-07-15 22:45:30.931160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:15.385 [2024-07-15 22:45:30.931267] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:15.385 [2024-07-15 22:45:30.931273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:15.385 [2024-07-15 22:45:30.931283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.931299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.931318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.931366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.931378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.931382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.931392] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:15.385 [2024-07-15 22:45:30.931403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.931420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.931437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.931488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.931495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.931499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.931508] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:15.385 [2024-07-15 22:45:30.931513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:15.385 [2024-07-15 22:45:30.931522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:15.385 [2024-07-15 22:45:30.931533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:15.385 [2024-07-15 22:45:30.931544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.385 [2024-07-15 22:45:30.931556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.385 [2024-07-15 22:45:30.931590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.385 [2024-07-15 22:45:30.931690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.385 [2024-07-15 22:45:30.931697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.385 [2024-07-15 22:45:30.931701] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931706] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=4096, cccid=0 00:15:15.385 [2024-07-15 22:45:30.931711] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd8840) on tqpair(0xb95a60): expected_datao=0, payload_size=4096 00:15:15.385 [2024-07-15 22:45:30.931716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931725] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931730] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 
22:45:30.931739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.385 [2024-07-15 22:45:30.931745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.385 [2024-07-15 22:45:30.931749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.385 [2024-07-15 22:45:30.931753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.385 [2024-07-15 22:45:30.931763] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:15.385 [2024-07-15 22:45:30.931769] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:15.385 [2024-07-15 22:45:30.931774] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:15.385 [2024-07-15 22:45:30.931779] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:15.385 [2024-07-15 22:45:30.931784] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:15.385 [2024-07-15 22:45:30.931789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:15.385 [2024-07-15 22:45:30.931804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:15.385 [2024-07-15 22:45:30.931813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.931830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.386 [2024-07-15 22:45:30.931849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.386 [2024-07-15 22:45:30.931903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.386 [2024-07-15 22:45:30.931909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.386 [2024-07-15 22:45:30.931913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.386 [2024-07-15 22:45:30.931926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.931941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.386 [2024-07-15 22:45:30.931948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb95a60) 00:15:15.386 
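
The trace above is the standard NVMe-oF bring-up sequence over TCP: the host toggles CC.EN through fabrics Property Set/Get capsules, polls CSTS.RDY, then issues Identify Controller (opcode 06h) and arms the four Asynchronous Event Request slots. As an illustrative sketch only (this test drives the target with SPDK's userspace host/identify example, not the kernel initiator, and the /dev names assume the first NVMe controller on the host), the same target could be exercised manually with nvme-cli:

  # connect to the target advertised later in this log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Identify Controller and Identify Namespace, mirroring the IDENTIFY (06h) commands in the trace
  nvme id-ctrl /dev/nvme0
  nvme id-ns /dev/nvme0n1
  # detach again when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
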
[2024-07-15 22:45:30.931962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.386 [2024-07-15 22:45:30.931969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.931983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.386 [2024-07-15 22:45:30.931995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.931999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.932009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.386 [2024-07-15 22:45:30.932014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.932054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.386 [2024-07-15 22:45:30.932074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8840, cid 0, qid 0 00:15:15.386 [2024-07-15 22:45:30.932081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd89c0, cid 1, qid 0 00:15:15.386 [2024-07-15 22:45:30.932086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8b40, cid 2, qid 0 00:15:15.386 [2024-07-15 22:45:30.932091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8cc0, cid 3, qid 0 00:15:15.386 [2024-07-15 22:45:30.932095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.386 [2024-07-15 22:45:30.932182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.386 [2024-07-15 22:45:30.932189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.386 [2024-07-15 22:45:30.932193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.386 [2024-07-15 22:45:30.932203] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:15.386 [2024-07-15 22:45:30.932208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932217] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.932251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.386 [2024-07-15 22:45:30.932269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.386 [2024-07-15 22:45:30.932335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.386 [2024-07-15 22:45:30.932342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.386 [2024-07-15 22:45:30.932347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.386 [2024-07-15 22:45:30.932414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.932447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.386 [2024-07-15 22:45:30.932466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.386 [2024-07-15 22:45:30.932528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.386 [2024-07-15 22:45:30.932535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.386 [2024-07-15 22:45:30.932539] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932543] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=4096, cccid=4 00:15:15.386 [2024-07-15 22:45:30.932548] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd8e40) on tqpair(0xb95a60): expected_datao=0, payload_size=4096 00:15:15.386 [2024-07-15 22:45:30.932553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932572] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932578] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.386 [2024-07-15 22:45:30.932593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:15.386 [2024-07-15 22:45:30.932597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.386 [2024-07-15 22:45:30.932620] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:15.386 [2024-07-15 22:45:30.932632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.932664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.386 [2024-07-15 22:45:30.932684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.386 [2024-07-15 22:45:30.932749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.386 [2024-07-15 22:45:30.932756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.386 [2024-07-15 22:45:30.932760] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932763] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=4096, cccid=4 00:15:15.386 [2024-07-15 22:45:30.932768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd8e40) on tqpair(0xb95a60): expected_datao=0, payload_size=4096 00:15:15.386 [2024-07-15 22:45:30.932773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932784] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.386 [2024-07-15 22:45:30.932799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.386 [2024-07-15 22:45:30.932802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.386 [2024-07-15 22:45:30.932819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:15.386 [2024-07-15 22:45:30.932839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.386 [2024-07-15 22:45:30.932851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.386 [2024-07-15 22:45:30.932869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.386 [2024-07-15 22:45:30.932931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.386 [2024-07-15 22:45:30.932938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.386 [2024-07-15 22:45:30.932942] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932946] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=4096, cccid=4 00:15:15.386 [2024-07-15 22:45:30.932951] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd8e40) on tqpair(0xb95a60): expected_datao=0, payload_size=4096 00:15:15.386 [2024-07-15 22:45:30.932955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932963] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932966] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.386 [2024-07-15 22:45:30.932975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.932982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.932985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.932989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.933003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933047] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:15.387 [2024-07-15 22:45:30.933052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:15.387 [2024-07-15 22:45:30.933057] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:15.387 [2024-07-15 22:45:30.933076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.387 [2024-07-15 22:45:30.933136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.387 [2024-07-15 22:45:30.933143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8fc0, cid 5, qid 0 00:15:15.387 [2024-07-15 22:45:30.933210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.933217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.933221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.933232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.933239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.933242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8fc0) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.933257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8fc0, cid 5, qid 0 00:15:15.387 [2024-07-15 22:45:30.933336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.933343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.933347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8fc0) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.933362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8fc0, cid 5, qid 0 00:15:15.387 [2024-07-15 22:45:30.933435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.933441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.933445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8fc0) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.933460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8fc0, cid 5, qid 0 00:15:15.387 [2024-07-15 22:45:30.933533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.933540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.933544] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8fc0) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.933579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb95a60) 00:15:15.387 [2024-07-15 22:45:30.933652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.387 [2024-07-15 22:45:30.933673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8fc0, cid 5, qid 0 00:15:15.387 [2024-07-15 22:45:30.933680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8e40, cid 4, qid 0 00:15:15.387 [2024-07-15 22:45:30.933685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd9140, cid 6, qid 0 00:15:15.387 [2024-07-15 
22:45:30.933690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd92c0, cid 7, qid 0 00:15:15.387 [2024-07-15 22:45:30.933826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.387 [2024-07-15 22:45:30.933833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.387 [2024-07-15 22:45:30.933837] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933841] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=8192, cccid=5 00:15:15.387 [2024-07-15 22:45:30.933845] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd8fc0) on tqpair(0xb95a60): expected_datao=0, payload_size=8192 00:15:15.387 [2024-07-15 22:45:30.933850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933867] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933872] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.387 [2024-07-15 22:45:30.933884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.387 [2024-07-15 22:45:30.933887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933891] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=512, cccid=4 00:15:15.387 [2024-07-15 22:45:30.933896] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd8e40) on tqpair(0xb95a60): expected_datao=0, payload_size=512 00:15:15.387 [2024-07-15 22:45:30.933900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933907] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.387 [2024-07-15 22:45:30.933923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.387 [2024-07-15 22:45:30.933926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933930] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=512, cccid=6 00:15:15.387 [2024-07-15 22:45:30.933935] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd9140) on tqpair(0xb95a60): expected_datao=0, payload_size=512 00:15:15.387 [2024-07-15 22:45:30.933939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933946] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933950] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:15.387 [2024-07-15 22:45:30.933961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:15.387 [2024-07-15 22:45:30.933965] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933969] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb95a60): datao=0, datal=4096, cccid=7 00:15:15.387 [2024-07-15 22:45:30.933973] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbd92c0) on tqpair(0xb95a60): expected_datao=0, payload_size=4096 00:15:15.387 [2024-07-15 22:45:30.933978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933985] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933989] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.933997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.934003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.934007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.934011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8fc0) on tqpair=0xb95a60 00:15:15.387 [2024-07-15 22:45:30.934029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.387 [2024-07-15 22:45:30.934036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.387 [2024-07-15 22:45:30.934040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.387 [2024-07-15 22:45:30.934044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8e40) on tqpair=0xb95a60 00:15:15.388 [2024-07-15 22:45:30.934058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.388 [2024-07-15 22:45:30.934064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.388 ===================================================== 00:15:15.388 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.388 ===================================================== 00:15:15.388 Controller Capabilities/Features 00:15:15.388 ================================ 00:15:15.388 Vendor ID: 8086 00:15:15.388 Subsystem Vendor ID: 8086 00:15:15.388 Serial Number: SPDK00000000000001 00:15:15.388 Model Number: SPDK bdev Controller 00:15:15.388 Firmware Version: 24.09 00:15:15.388 Recommended Arb Burst: 6 00:15:15.388 IEEE OUI Identifier: e4 d2 5c 00:15:15.388 Multi-path I/O 00:15:15.388 May have multiple subsystem ports: Yes 00:15:15.388 May have multiple controllers: Yes 00:15:15.388 Associated with SR-IOV VF: No 00:15:15.388 Max Data Transfer Size: 131072 00:15:15.388 Max Number of Namespaces: 32 00:15:15.388 Max Number of I/O Queues: 127 00:15:15.388 NVMe Specification Version (VS): 1.3 00:15:15.388 NVMe Specification Version (Identify): 1.3 00:15:15.388 Maximum Queue Entries: 128 00:15:15.388 Contiguous Queues Required: Yes 00:15:15.388 Arbitration Mechanisms Supported 00:15:15.388 Weighted Round Robin: Not Supported 00:15:15.388 Vendor Specific: Not Supported 00:15:15.388 Reset Timeout: 15000 ms 00:15:15.388 Doorbell Stride: 4 bytes 00:15:15.388 NVM Subsystem Reset: Not Supported 00:15:15.388 Command Sets Supported 00:15:15.388 NVM Command Set: Supported 00:15:15.388 Boot Partition: Not Supported 00:15:15.388 Memory Page Size Minimum: 4096 bytes 00:15:15.388 Memory Page Size Maximum: 4096 bytes 00:15:15.388 Persistent Memory Region: Not Supported 00:15:15.388 Optional Asynchronous Events Supported 00:15:15.388 Namespace Attribute Notices: Supported 00:15:15.388 Firmware Activation Notices: Not Supported 00:15:15.388 ANA Change Notices: Not Supported 00:15:15.388 PLE Aggregate Log Change Notices: Not Supported 00:15:15.388 LBA Status Info Alert Notices: Not Supported 00:15:15.388 EGE Aggregate Log 
Change Notices: Not Supported 00:15:15.388 Normal NVM Subsystem Shutdown event: Not Supported 00:15:15.388 Zone Descriptor Change Notices: Not Supported 00:15:15.388 Discovery Log Change Notices: Not Supported 00:15:15.388 Controller Attributes 00:15:15.388 128-bit Host Identifier: Supported 00:15:15.388 Non-Operational Permissive Mode: Not Supported 00:15:15.388 NVM Sets: Not Supported 00:15:15.388 Read Recovery Levels: Not Supported 00:15:15.388 Endurance Groups: Not Supported 00:15:15.388 Predictable Latency Mode: Not Supported 00:15:15.388 Traffic Based Keep ALive: Not Supported 00:15:15.388 Namespace Granularity: Not Supported 00:15:15.388 SQ Associations: Not Supported 00:15:15.388 UUID List: Not Supported 00:15:15.388 Multi-Domain Subsystem: Not Supported 00:15:15.388 Fixed Capacity Management: Not Supported 00:15:15.388 Variable Capacity Management: Not Supported 00:15:15.388 Delete Endurance Group: Not Supported 00:15:15.388 Delete NVM Set: Not Supported 00:15:15.388 Extended LBA Formats Supported: Not Supported 00:15:15.388 Flexible Data Placement Supported: Not Supported 00:15:15.388 00:15:15.388 Controller Memory Buffer Support 00:15:15.388 ================================ 00:15:15.388 Supported: No 00:15:15.388 00:15:15.388 Persistent Memory Region Support 00:15:15.388 ================================ 00:15:15.388 Supported: No 00:15:15.388 00:15:15.388 Admin Command Set Attributes 00:15:15.388 ============================ 00:15:15.388 Security Send/Receive: Not Supported 00:15:15.388 Format NVM: Not Supported 00:15:15.388 Firmware Activate/Download: Not Supported 00:15:15.388 Namespace Management: Not Supported 00:15:15.388 Device Self-Test: Not Supported 00:15:15.388 Directives: Not Supported 00:15:15.388 NVMe-MI: Not Supported 00:15:15.388 Virtualization Management: Not Supported 00:15:15.388 Doorbell Buffer Config: Not Supported 00:15:15.388 Get LBA Status Capability: Not Supported 00:15:15.388 Command & Feature Lockdown Capability: Not Supported 00:15:15.388 Abort Command Limit: 4 00:15:15.388 Async Event Request Limit: 4 00:15:15.388 Number of Firmware Slots: N/A 00:15:15.388 Firmware Slot 1 Read-Only: N/A 00:15:15.388 Firmware Activation Without Reset: [2024-07-15 22:45:30.934068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.388 [2024-07-15 22:45:30.934072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd9140) on tqpair=0xb95a60 00:15:15.388 [2024-07-15 22:45:30.934080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.388 [2024-07-15 22:45:30.934086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.388 [2024-07-15 22:45:30.934090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.388 [2024-07-15 22:45:30.934094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd92c0) on tqpair=0xb95a60 00:15:15.388 N/A 00:15:15.388 Multiple Update Detection Support: N/A 00:15:15.388 Firmware Update Granularity: No Information Provided 00:15:15.388 Per-Namespace SMART Log: No 00:15:15.388 Asymmetric Namespace Access Log Page: Not Supported 00:15:15.388 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:15.388 Command Effects Log Page: Supported 00:15:15.388 Get Log Page Extended Data: Supported 00:15:15.388 Telemetry Log Pages: Not Supported 00:15:15.388 Persistent Event Log Pages: Not Supported 00:15:15.388 Supported Log Pages Log Page: May Support 00:15:15.388 Commands Supported & Effects Log Page: Not Supported 00:15:15.388 Feature Identifiers & 
Effects Log Page:May Support 00:15:15.388 NVMe-MI Commands & Effects Log Page: May Support 00:15:15.388 Data Area 4 for Telemetry Log: Not Supported 00:15:15.388 Error Log Page Entries Supported: 128 00:15:15.388 Keep Alive: Supported 00:15:15.388 Keep Alive Granularity: 10000 ms 00:15:15.388 00:15:15.388 NVM Command Set Attributes 00:15:15.388 ========================== 00:15:15.388 Submission Queue Entry Size 00:15:15.388 Max: 64 00:15:15.388 Min: 64 00:15:15.388 Completion Queue Entry Size 00:15:15.388 Max: 16 00:15:15.388 Min: 16 00:15:15.388 Number of Namespaces: 32 00:15:15.388 Compare Command: Supported 00:15:15.388 Write Uncorrectable Command: Not Supported 00:15:15.388 Dataset Management Command: Supported 00:15:15.388 Write Zeroes Command: Supported 00:15:15.388 Set Features Save Field: Not Supported 00:15:15.388 Reservations: Supported 00:15:15.388 Timestamp: Not Supported 00:15:15.388 Copy: Supported 00:15:15.388 Volatile Write Cache: Present 00:15:15.388 Atomic Write Unit (Normal): 1 00:15:15.388 Atomic Write Unit (PFail): 1 00:15:15.388 Atomic Compare & Write Unit: 1 00:15:15.388 Fused Compare & Write: Supported 00:15:15.388 Scatter-Gather List 00:15:15.388 SGL Command Set: Supported 00:15:15.388 SGL Keyed: Supported 00:15:15.388 SGL Bit Bucket Descriptor: Not Supported 00:15:15.388 SGL Metadata Pointer: Not Supported 00:15:15.388 Oversized SGL: Not Supported 00:15:15.388 SGL Metadata Address: Not Supported 00:15:15.388 SGL Offset: Supported 00:15:15.388 Transport SGL Data Block: Not Supported 00:15:15.388 Replay Protected Memory Block: Not Supported 00:15:15.388 00:15:15.388 Firmware Slot Information 00:15:15.388 ========================= 00:15:15.388 Active slot: 1 00:15:15.388 Slot 1 Firmware Revision: 24.09 00:15:15.388 00:15:15.388 00:15:15.388 Commands Supported and Effects 00:15:15.388 ============================== 00:15:15.388 Admin Commands 00:15:15.388 -------------- 00:15:15.388 Get Log Page (02h): Supported 00:15:15.388 Identify (06h): Supported 00:15:15.388 Abort (08h): Supported 00:15:15.388 Set Features (09h): Supported 00:15:15.388 Get Features (0Ah): Supported 00:15:15.388 Asynchronous Event Request (0Ch): Supported 00:15:15.388 Keep Alive (18h): Supported 00:15:15.388 I/O Commands 00:15:15.388 ------------ 00:15:15.388 Flush (00h): Supported LBA-Change 00:15:15.388 Write (01h): Supported LBA-Change 00:15:15.388 Read (02h): Supported 00:15:15.388 Compare (05h): Supported 00:15:15.388 Write Zeroes (08h): Supported LBA-Change 00:15:15.388 Dataset Management (09h): Supported LBA-Change 00:15:15.388 Copy (19h): Supported LBA-Change 00:15:15.388 00:15:15.388 Error Log 00:15:15.388 ========= 00:15:15.388 00:15:15.388 Arbitration 00:15:15.388 =========== 00:15:15.388 Arbitration Burst: 1 00:15:15.388 00:15:15.388 Power Management 00:15:15.388 ================ 00:15:15.388 Number of Power States: 1 00:15:15.388 Current Power State: Power State #0 00:15:15.388 Power State #0: 00:15:15.388 Max Power: 0.00 W 00:15:15.388 Non-Operational State: Operational 00:15:15.388 Entry Latency: Not Reported 00:15:15.388 Exit Latency: Not Reported 00:15:15.388 Relative Read Throughput: 0 00:15:15.388 Relative Read Latency: 0 00:15:15.388 Relative Write Throughput: 0 00:15:15.388 Relative Write Latency: 0 00:15:15.389 Idle Power: Not Reported 00:15:15.389 Active Power: Not Reported 00:15:15.389 Non-Operational Permissive Mode: Not Supported 00:15:15.389 00:15:15.389 Health Information 00:15:15.389 ================== 00:15:15.389 Critical Warnings: 00:15:15.389 Available Spare Space: 
OK 00:15:15.389 Temperature: OK 00:15:15.389 Device Reliability: OK 00:15:15.389 Read Only: No 00:15:15.389 Volatile Memory Backup: OK 00:15:15.389 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:15.389 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:15.389 Available Spare: 0% 00:15:15.389 Available Spare Threshold: 0% 00:15:15.389 Life Percentage Used:[2024-07-15 22:45:30.934207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb95a60) 00:15:15.389 [2024-07-15 22:45:30.934223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.389 [2024-07-15 22:45:30.934245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd92c0, cid 7, qid 0 00:15:15.389 [2024-07-15 22:45:30.934297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.389 [2024-07-15 22:45:30.934304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.389 [2024-07-15 22:45:30.934308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd92c0) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.934353] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:15.389 [2024-07-15 22:45:30.934366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8840) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.934373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.389 [2024-07-15 22:45:30.934379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd89c0) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.934384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.389 [2024-07-15 22:45:30.934390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8b40) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.934395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.389 [2024-07-15 22:45:30.934400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8cc0) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.934405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.389 [2024-07-15 22:45:30.934415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb95a60) 00:15:15.389 [2024-07-15 22:45:30.934431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.389 [2024-07-15 22:45:30.934453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8cc0, cid 3, qid 0 00:15:15.389 [2024-07-15 22:45:30.934503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.389 [2024-07-15 22:45:30.934510] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.389 [2024-07-15 22:45:30.934514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8cc0) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.934526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.934535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb95a60) 00:15:15.389 [2024-07-15 22:45:30.934542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.389 [2024-07-15 22:45:30.938573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8cc0, cid 3, qid 0 00:15:15.389 [2024-07-15 22:45:30.938596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.389 [2024-07-15 22:45:30.938604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.389 [2024-07-15 22:45:30.938608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.938612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8cc0) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.938619] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:15.389 [2024-07-15 22:45:30.938625] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:15.389 [2024-07-15 22:45:30.938639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.938644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.938648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb95a60) 00:15:15.389 [2024-07-15 22:45:30.938656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:15.389 [2024-07-15 22:45:30.938681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbd8cc0, cid 3, qid 0 00:15:15.389 [2024-07-15 22:45:30.938735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:15.389 [2024-07-15 22:45:30.938742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:15.389 [2024-07-15 22:45:30.938746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:15.389 [2024-07-15 22:45:30.938750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbd8cc0) on tqpair=0xb95a60 00:15:15.389 [2024-07-15 22:45:30.938759] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:15:15.648 0% 00:15:15.648 Data Units Read: 0 00:15:15.648 Data Units Written: 0 00:15:15.648 Host Read Commands: 0 00:15:15.648 Host Write Commands: 0 00:15:15.648 Controller Busy Time: 0 minutes 00:15:15.648 Power Cycles: 0 00:15:15.648 Power On Hours: 0 hours 00:15:15.648 Unsafe Shutdowns: 0 00:15:15.648 Unrecoverable Media Errors: 0 00:15:15.648 Lifetime Error Log Entries: 0 00:15:15.648 Warning Temperature Time: 0 minutes 00:15:15.648 Critical Temperature Time: 0 minutes 00:15:15.648 00:15:15.648 Number of Queues 00:15:15.648 
================ 00:15:15.648 Number of I/O Submission Queues: 127 00:15:15.648 Number of I/O Completion Queues: 127 00:15:15.648 00:15:15.648 Active Namespaces 00:15:15.648 ================= 00:15:15.648 Namespace ID:1 00:15:15.648 Error Recovery Timeout: Unlimited 00:15:15.648 Command Set Identifier: NVM (00h) 00:15:15.648 Deallocate: Supported 00:15:15.648 Deallocated/Unwritten Error: Not Supported 00:15:15.648 Deallocated Read Value: Unknown 00:15:15.648 Deallocate in Write Zeroes: Not Supported 00:15:15.648 Deallocated Guard Field: 0xFFFF 00:15:15.648 Flush: Supported 00:15:15.648 Reservation: Supported 00:15:15.648 Namespace Sharing Capabilities: Multiple Controllers 00:15:15.648 Size (in LBAs): 131072 (0GiB) 00:15:15.648 Capacity (in LBAs): 131072 (0GiB) 00:15:15.648 Utilization (in LBAs): 131072 (0GiB) 00:15:15.648 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:15.648 EUI64: ABCDEF0123456789 00:15:15.648 UUID: cf8169e9-9aaa-414e-bed4-2d988852fb2d 00:15:15.648 Thin Provisioning: Not Supported 00:15:15.648 Per-NS Atomic Units: Yes 00:15:15.648 Atomic Boundary Size (Normal): 0 00:15:15.648 Atomic Boundary Size (PFail): 0 00:15:15.648 Atomic Boundary Offset: 0 00:15:15.648 Maximum Single Source Range Length: 65535 00:15:15.648 Maximum Copy Length: 65535 00:15:15.648 Maximum Source Range Count: 1 00:15:15.648 NGUID/EUI64 Never Reused: No 00:15:15.648 Namespace Write Protected: No 00:15:15.648 Number of LBA Formats: 1 00:15:15.648 Current LBA Format: LBA Format #00 00:15:15.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:15.648 00:15:15.648 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:15.648 22:45:30 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.648 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.648 22:45:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.648 rmmod nvme_tcp 00:15:15.648 rmmod nvme_fabrics 00:15:15.648 rmmod nvme_keyring 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74862 ']' 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74862 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74862 ']' 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74862 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74862 00:15:15.648 killing process with pid 74862 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74862' 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74862 00:15:15.648 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74862 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:15.907 00:15:15.907 real 0m2.503s 00:15:15.907 user 0m7.023s 00:15:15.907 sys 0m0.633s 00:15:15.907 ************************************ 00:15:15.907 END TEST nvmf_identify 00:15:15.907 ************************************ 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.907 22:45:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:15.907 22:45:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:15.907 22:45:31 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:15.907 22:45:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.907 22:45:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.907 22:45:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:15.907 ************************************ 00:15:15.907 START TEST nvmf_perf 00:15:15.907 ************************************ 00:15:15.907 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:16.165 * Looking for test storage... 
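
Condensed, the teardown that closes nvmf_identify above amounts to the short sequence below (a sketch only: rpc_cmd, nvmftestfini and killprocess are helpers from the test harness, the rpc.py path matches this repo layout, and PID 74862 is specific to this run):

  # remove the subsystem from the still-running nvmf target over its RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload the kernel initiator modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt reactor (killprocess also waits for it to exit) and flush the initiator-side address
  kill 74862
  ip -4 addr flush nvmf_init_if
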
00:15:16.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:16.165 22:45:31 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.166 Cannot find device "nvmf_tgt_br" 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.166 Cannot find device "nvmf_tgt_br2" 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:16.166 Cannot find device "nvmf_tgt_br" 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.166 Cannot find device "nvmf_tgt_br2" 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.166 
22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.166 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:16.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:16.425 00:15:16.425 --- 10.0.0.2 ping statistics --- 00:15:16.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.425 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:16.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:16.425 00:15:16.425 --- 10.0.0.3 ping statistics --- 00:15:16.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.425 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:16.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:16.425 00:15:16.425 --- 10.0.0.1 ping statistics --- 00:15:16.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.425 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:16.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75070 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75070 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75070 ']' 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.425 22:45:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:16.425 [2024-07-15 22:45:31.960732] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:15:16.425 [2024-07-15 22:45:31.960824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.684 [2024-07-15 22:45:32.097245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.684 [2024-07-15 22:45:32.230414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.684 [2024-07-15 22:45:32.230777] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:16.684 [2024-07-15 22:45:32.230812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.684 [2024-07-15 22:45:32.230828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.684 [2024-07-15 22:45:32.230838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.684 [2024-07-15 22:45:32.230938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.684 [2024-07-15 22:45:32.231094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.684 [2024-07-15 22:45:32.231410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.684 [2024-07-15 22:45:32.231438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.943 [2024-07-15 22:45:32.289148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:17.509 22:45:32 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:18.076 22:45:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:18.076 22:45:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:18.383 22:45:33 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:18.384 22:45:33 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:18.642 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:18.642 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:18.642 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:18.642 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:18.642 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:18.901 [2024-07-15 22:45:34.274425] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.901 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.160 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:19.160 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:19.418 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:19.418 22:45:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:15:19.675 22:45:35 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.932 [2024-07-15 22:45:35.295761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.932 22:45:35 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.190 22:45:35 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:20.190 22:45:35 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:20.190 22:45:35 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:20.190 22:45:35 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:21.565 Initializing NVMe Controllers 00:15:21.565 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:21.565 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:21.565 Initialization complete. Launching workers. 00:15:21.565 ======================================================== 00:15:21.565 Latency(us) 00:15:21.565 Device Information : IOPS MiB/s Average min max 00:15:21.565 PCIE (0000:00:10.0) NSID 1 from core 0: 24668.67 96.36 1302.21 237.40 5224.20 00:15:21.565 ======================================================== 00:15:21.565 Total : 24668.67 96.36 1302.21 237.40 5224.20 00:15:21.565 00:15:21.565 22:45:36 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:22.549 Initializing NVMe Controllers 00:15:22.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:22.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:22.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:22.549 Initialization complete. Launching workers. 00:15:22.549 ======================================================== 00:15:22.549 Latency(us) 00:15:22.549 Device Information : IOPS MiB/s Average min max 00:15:22.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3493.00 13.64 283.67 104.96 7208.54 00:15:22.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8186.26 6001.75 12044.37 00:15:22.549 ======================================================== 00:15:22.549 Total : 3616.00 14.12 552.48 104.96 12044.37 00:15:22.549 00:15:22.549 22:45:38 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:23.923 Initializing NVMe Controllers 00:15:23.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:23.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:23.924 Initialization complete. Launching workers. 
00:15:23.924 ======================================================== 00:15:23.924 Latency(us) 00:15:23.924 Device Information : IOPS MiB/s Average min max 00:15:23.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8421.31 32.90 3804.31 685.50 8115.01 00:15:23.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.19 15.75 7989.11 6624.07 9285.00 00:15:23.924 ======================================================== 00:15:23.924 Total : 12452.50 48.64 5159.04 685.50 9285.00 00:15:23.924 00:15:23.924 22:45:39 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:23.924 22:45:39 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:26.543 Initializing NVMe Controllers 00:15:26.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.543 Controller IO queue size 128, less than required. 00:15:26.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.543 Controller IO queue size 128, less than required. 00:15:26.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:26.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:26.543 Initialization complete. Launching workers. 00:15:26.543 ======================================================== 00:15:26.543 Latency(us) 00:15:26.543 Device Information : IOPS MiB/s Average min max 00:15:26.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1526.25 381.56 85553.51 41199.32 144334.59 00:15:26.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 677.33 169.33 198066.63 48970.77 285173.51 00:15:26.543 ======================================================== 00:15:26.543 Total : 2203.58 550.90 120137.67 41199.32 285173.51 00:15:26.543 00:15:26.543 22:45:42 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:26.817 Initializing NVMe Controllers 00:15:26.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.817 Controller IO queue size 128, less than required. 00:15:26.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.817 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:26.817 Controller IO queue size 128, less than required. 00:15:26.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:26.817 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:15:26.817 WARNING: Some requested NVMe devices were skipped 00:15:26.817 No valid NVMe controllers or AIO or URING devices found 00:15:26.817 22:45:42 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:29.346 Initializing NVMe Controllers 00:15:29.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.346 Controller IO queue size 128, less than required. 00:15:29.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.346 Controller IO queue size 128, less than required. 00:15:29.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:29.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:29.346 Initialization complete. Launching workers. 00:15:29.346 00:15:29.346 ==================== 00:15:29.346 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:29.346 TCP transport: 00:15:29.346 polls: 8757 00:15:29.346 idle_polls: 5274 00:15:29.346 sock_completions: 3483 00:15:29.346 nvme_completions: 6517 00:15:29.346 submitted_requests: 9868 00:15:29.346 queued_requests: 1 00:15:29.346 00:15:29.346 ==================== 00:15:29.346 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:29.346 TCP transport: 00:15:29.346 polls: 11149 00:15:29.346 idle_polls: 7088 00:15:29.346 sock_completions: 4061 00:15:29.346 nvme_completions: 6775 00:15:29.346 submitted_requests: 10068 00:15:29.346 queued_requests: 1 00:15:29.346 ======================================================== 00:15:29.346 Latency(us) 00:15:29.346 Device Information : IOPS MiB/s Average min max 00:15:29.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1627.17 406.79 80111.75 42880.85 151512.79 00:15:29.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1691.60 422.90 75580.89 31354.92 120278.67 00:15:29.346 ======================================================== 00:15:29.346 Total : 3318.77 829.69 77802.34 31354.92 151512.79 00:15:29.346 00:15:29.346 22:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:29.346 22:45:44 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.604 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:29.605 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.605 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.605 rmmod nvme_tcp 00:15:29.605 rmmod nvme_fabrics 00:15:29.605 rmmod nvme_keyring 00:15:29.863 22:45:45 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75070 ']' 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75070 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75070 ']' 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75070 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75070 00:15:29.863 killing process with pid 75070 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75070' 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75070 00:15:29.863 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75070 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:30.428 ************************************ 00:15:30.428 END TEST nvmf_perf 00:15:30.428 ************************************ 00:15:30.428 00:15:30.428 real 0m14.541s 00:15:30.428 user 0m53.602s 00:15:30.428 sys 0m4.064s 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.428 22:45:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:30.686 22:45:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.686 22:45:46 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:30.686 22:45:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:30.686 22:45:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.686 22:45:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.686 ************************************ 00:15:30.686 START TEST nvmf_fio_host 00:15:30.686 ************************************ 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:30.686 * Looking for test storage... 
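Note: the nvmf_fio_host test starting here exercises the same NVMe-oF/TCP target, but through fio's SPDK external ioengine rather than spdk_nvme_perf. Condensed from the fio_nvme/fio_plugin invocations traced further down, the run reduces to the sketch below; the plugin path and the --filename syntax are the ones shown in this log, while the contents of example_config.fio (which presumably set ioengine=spdk, the randrw pattern, and iodepth) are not visible here and are assumed.

# Sketch only, condensed from the trace below -- not the harness script itself.
# Prerequisite (also traced below): subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096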
00:15:30.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.686 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
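Note: with NET_TYPE=virt, nvmftestinit ends up in nvmf_veth_init, whose trace follows. Boiled down, it puts the target in its own network namespace, creates a veth pair per side, enslaves the root-namespace peers to a bridge, and opens TCP port 4420 via iptables. The condensed sketch below uses exactly the names and addresses from this log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here, as are the cleanup and error paths.

# Sketch of the veth/namespace topology built by nvmf_veth_init (condensed from the trace below).
ip netns add nvmf_tgt_ns_spdk                                          # target runs in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                         # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                        # bridge joining both halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT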
00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:30.687 Cannot find device "nvmf_tgt_br" 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.687 Cannot find device "nvmf_tgt_br2" 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:30.687 Cannot find device "nvmf_tgt_br" 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:30.687 Cannot find device "nvmf_tgt_br2" 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:30.687 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.945 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:30.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:30.946 00:15:30.946 --- 10.0.0.2 ping statistics --- 00:15:30.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.946 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:30.946 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.946 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:30.946 00:15:30.946 --- 10.0.0.3 ping statistics --- 00:15:30.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.946 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:30.946 00:15:30.946 --- 10.0.0.1 ping statistics --- 00:15:30.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.946 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75478 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75478 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75478 ']' 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.946 22:45:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.204 [2024-07-15 22:45:46.544948] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:15:31.204 [2024-07-15 22:45:46.545316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.204 [2024-07-15 22:45:46.686652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.462 [2024-07-15 22:45:46.799362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.462 [2024-07-15 22:45:46.799684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:31.462 [2024-07-15 22:45:46.799825] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.462 [2024-07-15 22:45:46.799951] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.462 [2024-07-15 22:45:46.799988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.462 [2024-07-15 22:45:46.800219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.462 [2024-07-15 22:45:46.800481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.462 [2024-07-15 22:45:46.800788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.462 [2024-07-15 22:45:46.800798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.462 [2024-07-15 22:45:46.856577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:32.026 22:45:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.026 22:45:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:32.026 22:45:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.284 [2024-07-15 22:45:47.717392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.284 22:45:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:32.284 22:45:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.284 22:45:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.284 22:45:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:32.543 Malloc1 00:15:32.543 22:45:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.801 22:45:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.079 22:45:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.338 [2024-07-15 22:45:48.800551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.338 22:45:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:33.597 22:45:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:33.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:33.855 fio-3.35 00:15:33.855 Starting 1 thread 00:15:36.383 00:15:36.383 test: (groupid=0, jobs=1): err= 0: pid=75561: Mon Jul 15 22:45:51 2024 00:15:36.383 read: IOPS=8993, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2006msec) 00:15:36.383 slat (usec): min=2, max=365, avg= 2.42, stdev= 3.41 00:15:36.383 clat (usec): min=2618, max=13274, avg=7384.36, stdev=533.71 00:15:36.383 lat (usec): min=2660, max=13276, avg=7386.78, stdev=533.39 00:15:36.383 clat percentiles (usec): 00:15:36.383 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 6980], 00:15:36.383 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7439], 00:15:36.383 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:15:36.383 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[11600], 99.95th=[12911], 00:15:36.383 | 99.99th=[13173] 00:15:36.383 bw ( KiB/s): min=34872, max=36440, per=99.95%, avg=35954.00, stdev=732.90, samples=4 00:15:36.383 iops : min= 8718, max= 9110, avg=8988.50, stdev=183.23, samples=4 00:15:36.383 write: IOPS=9014, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2006msec); 0 zone resets 00:15:36.383 slat (usec): min=2, max=256, avg= 2.54, stdev= 2.22 00:15:36.383 clat (usec): min=2469, max=13111, avg=6736.73, stdev=492.23 00:15:36.383 lat (usec): min=2483, 
max=13113, avg=6739.28, stdev=492.04 00:15:36.383 clat percentiles (usec): 00:15:36.383 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6390], 00:15:36.383 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6783], 00:15:36.383 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7373], 00:15:36.383 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[11600], 99.95th=[12387], 00:15:36.383 | 99.99th=[13042] 00:15:36.383 bw ( KiB/s): min=35648, max=36400, per=99.95%, avg=36040.00, stdev=412.14, samples=4 00:15:36.383 iops : min= 8912, max= 9100, avg=9010.00, stdev=103.03, samples=4 00:15:36.383 lat (msec) : 4=0.10%, 10=99.67%, 20=0.23% 00:15:36.383 cpu : usr=71.27%, sys=21.40%, ctx=7, majf=0, minf=6 00:15:36.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:36.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:36.384 issued rwts: total=18040,18084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:36.384 00:15:36.384 Run status group 0 (all jobs): 00:15:36.384 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2006-2006msec 00:15:36.384 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.1MB), run=2006-2006msec 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:36.384 22:45:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:36.384 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:36.384 fio-3.35 00:15:36.384 Starting 1 thread 00:15:38.911 00:15:38.911 test: (groupid=0, jobs=1): err= 0: pid=75604: Mon Jul 15 22:45:54 2024 00:15:38.911 read: IOPS=8288, BW=130MiB/s (136MB/s)(260MiB/2007msec) 00:15:38.911 slat (usec): min=3, max=142, avg= 3.90, stdev= 1.95 00:15:38.911 clat (usec): min=1708, max=17761, avg=8616.26, stdev=2763.19 00:15:38.911 lat (usec): min=1712, max=17764, avg=8620.16, stdev=2763.20 00:15:38.911 clat percentiles (usec): 00:15:38.911 | 1.00th=[ 4080], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6063], 00:15:38.911 | 30.00th=[ 6915], 40.00th=[ 7570], 50.00th=[ 8291], 60.00th=[ 8979], 00:15:38.911 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12256], 95.00th=[13698], 00:15:38.911 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17171], 99.95th=[17433], 00:15:38.911 | 99.99th=[17695] 00:15:38.911 bw ( KiB/s): min=58048, max=72928, per=50.37%, avg=66792.00, stdev=6694.98, samples=4 00:15:38.911 iops : min= 3628, max= 4558, avg=4174.50, stdev=418.44, samples=4 00:15:38.911 write: IOPS=4842, BW=75.7MiB/s (79.3MB/s)(137MiB/1812msec); 0 zone resets 00:15:38.911 slat (usec): min=34, max=391, avg=39.09, stdev= 7.92 00:15:38.911 clat (usec): min=5015, max=20811, avg=12049.25, stdev=2187.45 00:15:38.911 lat (usec): min=5053, max=20848, avg=12088.34, stdev=2188.20 00:15:38.911 clat percentiles (usec): 00:15:38.911 | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10290], 00:15:38.911 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:15:38.911 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15008], 95.00th=[16057], 00:15:38.911 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20055], 99.95th=[20317], 00:15:38.911 | 99.99th=[20841] 00:15:38.911 bw ( KiB/s): min=61632, max=75584, per=89.90%, avg=69656.00, stdev=6335.10, samples=4 00:15:38.911 iops : min= 3852, max= 4724, avg=4353.50, stdev=395.94, samples=4 00:15:38.911 lat (msec) : 2=0.02%, 4=0.56%, 10=51.18%, 20=48.19%, 50=0.04% 00:15:38.911 cpu : usr=81.75%, sys=13.61%, ctx=13, majf=0, minf=21 00:15:38.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:38.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.911 issued rwts: total=16635,8775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.911 00:15:38.911 Run status group 0 (all jobs): 00:15:38.911 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2007-2007msec 00:15:38.911 WRITE: bw=75.7MiB/s (79.3MB/s), 75.7MiB/s-75.7MiB/s 
(79.3MB/s-79.3MB/s), io=137MiB (144MB), run=1812-1812msec 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.911 rmmod nvme_tcp 00:15:38.911 rmmod nvme_fabrics 00:15:38.911 rmmod nvme_keyring 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75478 ']' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75478 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75478 ']' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75478 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75478 00:15:38.911 killing process with pid 75478 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75478' 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75478 00:15:38.911 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75478 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.170 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.428 22:45:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:39.428 ************************************ 00:15:39.428 END TEST nvmf_fio_host 00:15:39.428 ************************************ 00:15:39.428 00:15:39.428 real 0m8.720s 00:15:39.428 user 0m35.618s 00:15:39.428 sys 0m2.403s 00:15:39.428 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.428 22:45:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.428 22:45:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:39.428 22:45:54 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:39.428 22:45:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.428 22:45:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.428 22:45:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.428 ************************************ 00:15:39.428 START TEST nvmf_failover 00:15:39.428 ************************************ 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:39.428 * Looking for test storage... 00:15:39.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.428 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.429 Cannot find device "nvmf_tgt_br" 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.429 Cannot find device "nvmf_tgt_br2" 00:15:39.429 22:45:54 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.429 Cannot find device "nvmf_tgt_br" 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.429 Cannot find device "nvmf_tgt_br2" 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:39.429 22:45:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:39.711 22:45:55 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:39.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:15:39.711 00:15:39.711 --- 10.0.0.2 ping statistics --- 00:15:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.711 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:39.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:39.711 00:15:39.711 --- 10.0.0.3 ping statistics --- 00:15:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.711 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:39.711 00:15:39.711 --- 10.0.0.1 ping statistics --- 00:15:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.711 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.711 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75820 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75820 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75820 ']' 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
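In outline, the nvmf_veth_init trace above builds the following test topology. The sketch below is reconstructed only from the commands visible in the trace (the teardown of leftover interfaces, some of the link-up steps, and the connectivity pings are omitted, and nothing beyond what the trace shows is assumed): the initiator keeps nvmf_init_if at 10.0.0.1 in the root namespace, the target side gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and the host-side peer ends are joined on the nvmf_br bridge with NVMe/TCP port 4420 allowed in through iptables.

  # condensed sketch of the nvmf_veth_init setup traced above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target end, moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target end, used for the 10.0.0.3 address
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peer ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT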
00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.970 22:45:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.970 [2024-07-15 22:45:55.343726] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:15:39.970 [2024-07-15 22:45:55.343816] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.970 [2024-07-15 22:45:55.480727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.228 [2024-07-15 22:45:55.602226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.229 [2024-07-15 22:45:55.602295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.229 [2024-07-15 22:45:55.602306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.229 [2024-07-15 22:45:55.602315] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.229 [2024-07-15 22:45:55.602322] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.229 [2024-07-15 22:45:55.602483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.229 [2024-07-15 22:45:55.603071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.229 [2024-07-15 22:45:55.603083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.229 [2024-07-15 22:45:55.656362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.163 [2024-07-15 22:45:56.631765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.163 22:45:56 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:41.421 Malloc0 00:15:41.421 22:45:56 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:41.678 22:45:57 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.936 22:45:57 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.194 [2024-07-15 22:45:57.627934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.194 22:45:57 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:42.453 [2024-07-15 22:45:57.860141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:42.453 22:45:57 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:42.711 [2024-07-15 22:45:58.152432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75872 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75872 /var/tmp/bdevperf.sock 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 75872 ']' 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.711 22:45:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:44.086 22:45:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.086 22:45:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:44.086 22:45:59 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.086 NVMe0n1 00:15:44.086 22:45:59 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.653 00:15:44.653 22:45:59 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75901 00:15:44.653 22:45:59 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:44.653 22:45:59 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.588 22:46:00 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.846 22:46:01 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:49.185 22:46:04 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.185 00:15:49.185 22:46:04 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:49.444 22:46:04 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:52.727 22:46:07 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.727 [2024-07-15 22:46:08.198203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.727 22:46:08 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:53.680 22:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:53.938 [2024-07-15 22:46:09.478045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb580 is same with the state(5) to be set 00:15:53.938 [2024-07-15 22:46:09.478143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb580 is same 
with the state(5) to be set 00:15:53.938 [2024-07-15 22:46:09.478156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb580 is same with the state(5) to be set 00:15:53.938 [2024-07-15 22:46:09.478166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb580 is same with the state(5) to be set 00:15:53.938 [2024-07-15 22:46:09.478176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb580 is same with the state(5) to be set 00:15:53.938 [2024-07-15 22:46:09.478186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb580 is same with the state(5) to be set 00:15:53.938 22:46:09 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75901 00:16:00.495 0 00:16:00.495 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75872 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75872 ']' 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75872 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75872 00:16:00.496 killing process with pid 75872 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75872' 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75872 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75872 00:16:00.496 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:00.496 [2024-07-15 22:45:58.214563] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:16:00.496 [2024-07-15 22:45:58.214704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75872 ] 00:16:00.496 [2024-07-15 22:45:58.350173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.496 [2024-07-15 22:45:58.467861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.496 [2024-07-15 22:45:58.523700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:00.496 Running I/O for 15 seconds... 
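The bdevperf log dumped below (the contents of try.txt) is easier to follow against the RPC sequence the failover test drove above. The sketch below condenses the commands visible in the trace into one place; paths are shortened to the script names (rpc.py, bdevperf, bdevperf.py) and the waitforlisten/trap plumbing is left out, so it is a recap rather than the literal failover.sh. The target exports Malloc0 through nqn.2016-06.io.spdk:cnode1 on ports 4420-4422, bdevperf attaches two paths and runs 4 KiB verify I/O at queue depth 128 for 15 seconds, and listeners are then removed and re-added one at a time; the ABORTED - SQ DELETION completions that follow appear to correspond to the first listener (port 4420) being torn down while I/O continues on the remaining path.

  # condensed recap of the failover sequence traced above (sketch; full paths and error handling omitted)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &         # 15 s of verify I/O across both attached paths
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3   # drop the first path while I/O runs
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait                                                          # wait for the 15 s bdevperf run started above to finish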
00:16:00.496 [2024-07-15 22:46:01.249019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249395] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249731] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.249974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.249988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.250017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67720 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.496 [2024-07-15 22:46:01.250046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.496 [2024-07-15 22:46:01.250075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.496 [2024-07-15 22:46:01.250104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.496 [2024-07-15 22:46:01.250134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.496 [2024-07-15 22:46:01.250163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.496 [2024-07-15 22:46:01.250192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.496 [2024-07-15 22:46:01.250215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:00.497 [2024-07-15 22:46:01.250344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.497 [2024-07-15 22:46:01.250516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.497 [2024-07-15 22:46:01.250544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250655] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.497 [2024-07-15 22:46:01.250799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.250981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.250996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.497 [2024-07-15 22:46:01.251486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.497 [2024-07-15 22:46:01.251499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:00.498 [2024-07-15 22:46:01.251880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.251983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.251997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.498 [2024-07-15 22:46:01.252795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.498 [2024-07-15 22:46:01.252808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:01.252823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:84 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.499 [2024-07-15 22:46:01.252837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.252851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.499 [2024-07-15 22:46:01.252865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.252880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.499 [2024-07-15 22:46:01.252893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.252915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.499 [2024-07-15 22:46:01.252930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.252950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.499 [2024-07-15 22:46:01.252968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.252984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.499 [2024-07-15 22:46:01.252997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.253012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x753380 is same with the state(5) to be set
00:16:00.499 [2024-07-15 22:46:01.253028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:00.499 [2024-07-15 22:46:01.253039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:00.499 [2024-07-15 22:46:01.253050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67472 len:8 PRP1 0x0 PRP2 0x0
00:16:00.499 [2024-07-15 22:46:01.253063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.253122] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x753380 was disconnected and freed. reset controller.
00:16:00.499 [2024-07-15 22:46:01.253140] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:00.499 [2024-07-15 22:46:01.253197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.499 [2024-07-15 22:46:01.253219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.253234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.499 [2024-07-15 22:46:01.253247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.253262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.499 [2024-07-15 22:46:01.253275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.253289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.499 [2024-07-15 22:46:01.253302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.499 [2024-07-15 22:46:01.253315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:00.499 [2024-07-15 22:46:01.257339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:00.499 [2024-07-15 22:46:01.257387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7030f0 (9): Bad file descriptor
00:16:00.499 [2024-07-15 22:46:01.292153] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:00.499 [2024-07-15 22:46:04.892813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.892894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.892951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.892970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.892986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.499 [2024-07-15 22:46:04.893639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.499 [2024-07-15 22:46:04.893751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.499 [2024-07-15 22:46:04.893766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.893794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.893823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:45 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.893852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.893892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.893921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.893950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.893979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.893994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48592 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:00.500 [2024-07-15 22:46:04.894458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.500 [2024-07-15 22:46:04.894623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894767] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.500 [2024-07-15 22:46:04.894811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.500 [2024-07-15 22:46:04.894825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.894841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.894854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.894870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.894891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.894906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.894921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.894936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.894950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.894965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.894979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.894994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895065] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.895580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:00.501 [2024-07-15 22:46:04.895691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.895978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.895994] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.896008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.896029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.896044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.896060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.501 [2024-07-15 22:46:04.896073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.896088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.501 [2024-07-15 22:46:04.896102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.501 [2024-07-15 22:46:04.896117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:04.896543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x754030 is same with the state(5) to be set 00:16:00.502 [2024-07-15 22:46:04.896587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48968 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896648] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49360 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49368 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49376 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49384 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49392 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49400 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:16:00.502 [2024-07-15 22:46:04.896943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49408 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.896965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.896978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.502 [2024-07-15 22:46:04.896988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.502 [2024-07-15 22:46:04.896998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49416 len:8 PRP1 0x0 PRP2 0x0 00:16:00.502 [2024-07-15 22:46:04.897010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.897082] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x754030 was disconnected and freed. reset controller. 00:16:00.502 [2024-07-15 22:46:04.897101] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:00.502 [2024-07-15 22:46:04.897161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.502 [2024-07-15 22:46:04.897182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.897197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.502 [2024-07-15 22:46:04.897210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.897224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.502 [2024-07-15 22:46:04.897236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.897250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.502 [2024-07-15 22:46:04.897273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:04.897293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:00.502 [2024-07-15 22:46:04.897344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7030f0 (9): Bad file descriptor 00:16:00.502 [2024-07-15 22:46:04.901201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:00.502 [2024-07-15 22:46:04.932315] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
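The sequence above records the expected abort-and-failover path: queued I/O on qpair 0x754030 is completed manually with ABORTED - SQ DELETION, bdev_nvme starts a failover from trid 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes. A multipath/failover setup of this kind is typically prepared by registering extra listeners on the target subsystem and attaching the additional path under the same controller name on the host. A minimal sketch against a running SPDK target follows; the bdev and controller names (Malloc0, NVMe0) are illustrative, the addresses, ports and NQN mirror what this log prints, and depending on the SPDK version a -x failover/multipath mode flag may be needed when attaching the secondary path:

  # Target side: one malloc-backed namespace, listeners on the two ports the log fails over between
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Host side: attach both paths under the same controller name so bdev_nvme can fail over between trids
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With both trids registered, dropping the active path (4421 here) causes the in-flight commands to be aborted as shown, after which the "resetting controller" / "Resetting controller successful" messages mark the switch to the remaining path.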
00:16:00.502 [2024-07-15 22:46:09.478680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:09.478769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:09.478801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:09.478829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:09.478859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:09.478889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.502 [2024-07-15 22:46:09.478918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.502 [2024-07-15 22:46:09.478932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.478947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.478961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.478976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479042] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479372] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.479747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.479937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.479962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.503 [2024-07-15 22:46:09.480345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.480374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.480405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.503 [2024-07-15 22:46:09.480420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.503 [2024-07-15 22:46:09.480437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.480477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.480506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.480534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.480575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.480606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480780] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.480980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.480993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 
[2024-07-15 22:46:09.481395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.504 [2024-07-15 22:46:09.481437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.504 [2024-07-15 22:46:09.481551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.504 [2024-07-15 22:46:09.481576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.481972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.481988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.505 [2024-07-15 22:46:09.482008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.505 [2024-07-15 22:46:09.482214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x786670 is same with the state(5) to be set 00:16:00.505 [2024-07-15 22:46:09.482246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.505 [2024-07-15 22:46:09.482263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.505 [2024-07-15 22:46:09.482274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91704 len:8 PRP1 0x0 PRP2 0x0 00:16:00.505 [2024-07-15 22:46:09.482287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.505 [2024-07-15 22:46:09.482302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.505 [2024-07-15 22:46:09.482311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.505 [2024-07-15 22:46:09.482321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92208 len:8 PRP1 0x0 PRP2 0x0 00:16:00.505 [2024-07-15 22:46:09.482340] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive span condensed: the same four-entry pattern -- nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually, nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 -- repeats for every queued WRITE from lba:92216 through lba:92352 in steps of 8 while qpair 1 is deleted during the controller reset]
00:16:00.506 [2024-07-15
22:46:09.483285] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x786670 was disconnected and freed. reset controller. 00:16:00.506 [2024-07-15 22:46:09.483304] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:00.506 [2024-07-15 22:46:09.483366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.506 [2024-07-15 22:46:09.483388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.506 [2024-07-15 22:46:09.483403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.506 [2024-07-15 22:46:09.483416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.506 [2024-07-15 22:46:09.483430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.506 [2024-07-15 22:46:09.483444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.506 [2024-07-15 22:46:09.483458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.506 [2024-07-15 22:46:09.483479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.506 [2024-07-15 22:46:09.483493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:00.506 [2024-07-15 22:46:09.487311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:00.506 [2024-07-15 22:46:09.487351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7030f0 (9): Bad file descriptor 00:16:00.506 [2024-07-15 22:46:09.519794] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:00.506 00:16:00.506 Latency(us) 00:16:00.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.506 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.506 Verification LBA range: start 0x0 length 0x4000 00:16:00.506 NVMe0n1 : 15.01 8032.38 31.38 202.86 0.00 15510.57 666.53 19065.02 00:16:00.506 =================================================================================================================== 00:16:00.506 Total : 8032.38 31.38 202.86 0.00 15510.57 666.53 19065.02 00:16:00.506 Received shutdown signal, test time was about 15.000000 seconds 00:16:00.506 00:16:00.506 Latency(us) 00:16:00.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.506 =================================================================================================================== 00:16:00.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
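For reference, the 31.38 MiB/s column in the bdevperf table above follows directly from the reported 8032.38 IOPS at a 4096-byte IO size (MiB/s = IOPS * IO size / 2^20). A quick shell check of that arithmetic (not output from this run, just a sanity check on the reported figures):
awk 'BEGIN { printf "%.2f MiB/s\n", 8032.38 * 4096 / 1048576 }'    # ~31.38 MiB/s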
00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76074 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76074 /var/tmp/bdevperf.sock 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76074 ']' 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.506 22:46:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 22:46:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.073 22:46:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:01.073 22:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:01.073 [2024-07-15 22:46:16.591281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:01.073 22:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:01.332 [2024-07-15 22:46:16.851638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:01.332 22:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.899 NVMe0n1 00:16:01.899 22:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.158 00:16:02.158 22:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.429 00:16:02.429 22:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:02.429 22:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:02.698 22:46:18 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.956 22:46:18 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:06.240 22:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.240 22:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:06.240 22:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:06.240 22:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76157 00:16:06.240 22:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76157 00:16:07.615 0 00:16:07.615 22:46:22 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:07.615 [2024-07-15 22:46:15.405325] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:16:07.615 [2024-07-15 22:46:15.405445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76074 ] 00:16:07.615 [2024-07-15 22:46:15.540986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.615 [2024-07-15 22:46:15.658419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.615 [2024-07-15 22:46:15.712406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:07.615 [2024-07-15 22:46:18.469891] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:07.615 [2024-07-15 22:46:18.470039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.615 [2024-07-15 22:46:18.470065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.615 [2024-07-15 22:46:18.470084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.615 [2024-07-15 22:46:18.470114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.615 [2024-07-15 22:46:18.470129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.615 [2024-07-15 22:46:18.470142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.615 [2024-07-15 22:46:18.470157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.615 [2024-07-15 22:46:18.470171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.615 [2024-07-15 22:46:18.470185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:07.615 [2024-07-15 22:46:18.470244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:07.615 [2024-07-15 22:46:18.470278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ab0f0 (9): Bad file descriptor 00:16:07.615 [2024-07-15 22:46:18.477065] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
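For orientation, the failover path recapped above boils down to the RPC sequence below. This is a minimal sketch assembled from the commands visible in this log (same target address, subsystem NQN, bdevperf socket, and try.txt capture as this run), not a substitute for test/nvmf/host/failover.sh:
# Publish the extra listeners that serve as failover targets
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Register all three paths with bdevperf under the same controller name
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Drop the active path; bdev_nvme fails over to the next trid and, once reconnected,
# logs "Resetting controller successful", which the test then counts
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt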
00:16:07.615 Running I/O for 1 seconds... 00:16:07.615 00:16:07.615 Latency(us) 00:16:07.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.615 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.615 Verification LBA range: start 0x0 length 0x4000 00:16:07.615 NVMe0n1 : 1.01 6523.64 25.48 0.00 0.00 19535.88 2412.92 15966.95 00:16:07.615 =================================================================================================================== 00:16:07.615 Total : 6523.64 25.48 0.00 0.00 19535.88 2412.92 15966.95 00:16:07.615 22:46:22 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:07.615 22:46:22 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:07.615 22:46:23 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.872 22:46:23 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:07.872 22:46:23 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:08.437 22:46:23 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:08.437 22:46:23 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:11.719 22:46:26 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.719 22:46:26 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76074 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76074 ']' 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76074 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76074 00:16:11.719 killing process with pid 76074 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76074' 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76074 00:16:11.719 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76074 00:16:12.285 22:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:12.285 22:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.543 rmmod nvme_tcp 00:16:12.543 rmmod nvme_fabrics 00:16:12.543 rmmod nvme_keyring 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75820 ']' 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75820 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75820 ']' 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75820 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75820 00:16:12.543 killing process with pid 75820 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75820' 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75820 00:16:12.543 22:46:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75820 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:12.801 ************************************ 00:16:12.801 END TEST nvmf_failover 00:16:12.801 ************************************ 00:16:12.801 00:16:12.801 real 0m33.480s 00:16:12.801 user 2m9.769s 00:16:12.801 sys 0m5.707s 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.801 22:46:28 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.801 22:46:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:12.801 22:46:28 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.801 22:46:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.801 22:46:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.801 22:46:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.801 ************************************ 00:16:12.801 START TEST nvmf_host_discovery 00:16:12.801 ************************************ 00:16:12.801 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:13.059 * Looking for test storage... 00:16:13.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.059 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:13.060 Cannot find device "nvmf_tgt_br" 00:16:13.060 
22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.060 Cannot find device "nvmf_tgt_br2" 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:13.060 Cannot find device "nvmf_tgt_br" 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:13.060 Cannot find device "nvmf_tgt_br2" 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.060 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:13.318 00:16:13.318 --- 10.0.0.2 ping statistics --- 00:16:13.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.318 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:13.318 00:16:13.318 --- 10.0.0.3 ping statistics --- 00:16:13.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.318 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:13.318 00:16:13.318 --- 10.0.0.1 ping statistics --- 00:16:13.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.318 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.318 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76427 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76427 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76427 ']' 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.319 22:46:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.319 [2024-07-15 22:46:28.828428] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:16:13.319 [2024-07-15 22:46:28.828533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.576 [2024-07-15 22:46:28.964404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.576 [2024-07-15 22:46:29.081033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.576 [2024-07-15 22:46:29.081083] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:13.576 [2024-07-15 22:46:29.081095] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.576 [2024-07-15 22:46:29.081104] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.576 [2024-07-15 22:46:29.081111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.576 [2024-07-15 22:46:29.081141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.576 [2024-07-15 22:46:29.134011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 [2024-07-15 22:46:29.945298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 [2024-07-15 22:46:29.953423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 null0 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 null1 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76459 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76459 /tmp/host.sock 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76459 ']' 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.511 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.511 22:46:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 [2024-07-15 22:46:30.039685] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:16:14.511 [2024-07-15 22:46:30.039787] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76459 ] 00:16:14.768 [2024-07-15 22:46:30.178392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.768 [2024-07-15 22:46:30.294099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.026 [2024-07-15 22:46:30.347199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:15.593 22:46:31 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.593 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:15.852 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 [2024-07-15 22:46:31.497894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:16.110 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.111 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:16.369 22:46:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:16.640 [2024-07-15 22:46:32.128504] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:16.640 [2024-07-15 22:46:32.128547] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:16.640 [2024-07-15 22:46:32.128577] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:16.640 [2024-07-15 22:46:32.134549] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:16.640 [2024-07-15 22:46:32.191849] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:16:16.640 [2024-07-15 22:46:32.191902] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.207 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.465 22:46:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:17.465 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:17.466 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.466 22:46:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:17.466 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.466 22:46:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.466 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.724 [2024-07-15 22:46:33.119500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.724 [2024-07-15 22:46:33.120206] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:17.724 [2024-07-15 22:46:33.120243] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.724 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.725 [2024-07-15 22:46:33.126208] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.725 [2024-07-15 22:46:33.184548] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:17.725 [2024-07-15 22:46:33.184595] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:17.725 [2024-07-15 22:46:33.184603] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.725 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 [2024-07-15 22:46:33.373457] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:17.984 [2024-07-15 22:46:33.373499] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.984 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:17.984 [2024-07-15 22:46:33.378926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.984 [2024-07-15 22:46:33.378971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.984 [2024-07-15 22:46:33.378985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.984 [2024-07-15 22:46:33.378995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.984 [2024-07-15 22:46:33.379006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.984 [2024-07-15 22:46:33.379015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.984 [2024-07-15 22:46:33.379025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:17.984 [2024-07-15 22:46:33.379034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.984 [2024-07-15 22:46:33.379044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb500 is same with the state(5) to 
be set 00:16:17.984 [2024-07-15 22:46:33.379452] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:17.984 [2024-07-15 22:46:33.379481] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:17.985 [2024-07-15 22:46:33.379539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fb500 (9): Bad file descriptor 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 
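
The get_subsystem_names, get_bdev_list and get_subsystem_paths calls that recur throughout this trace are thin wrappers around the host application's RPC socket, and waitforcondition is the retry loop that gives the discovery service time to catch up with each target-side change. A minimal sketch of what the xtrace implies follows; it is reconstructed from the trace rather than copied from host/discovery.sh, rpc_cmd stands for the test framework's RPC helper, and HOST_SOCK is only shorthand for the /tmp/host.sock path seen above.

    # Reconstructed from the xtrace: query the host app and normalize the output
    # so string comparisons such as [[ "$(get_subsystem_names)" == "nvme0" ]] work.
    HOST_SOCK=/tmp/host.sock

    get_subsystem_names() {
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # One port number (trsvcid) per active path of the named controller.
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    waitforcondition() {
        # Re-evaluate an arbitrary condition for up to ~10 seconds before giving up.
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
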
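The is_notification_count_eq checks in the same trace count how many bdev events the host has raised since the previous check: notify_get_notifications is asked for everything after the last seen id, jq measures the array length, and the cursor moves forward, which is why notify_id steps through 0, 1, 2 and finally 4 above. A hedged sketch of that bookkeeping, reusing the HOST_SOCK shorthand from the previous snippet (variable names follow the xtrace; the real script may differ in detail):

    # Incremental notification counting as implied by the trace.
    notify_id=0

    get_notification_count() {
        # Count events newer than the last id we saw, then advance the cursor.
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
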
00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:17.985 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.244 22:46:33 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.244 22:46:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.618 [2024-07-15 22:46:34.789472] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:19.618 [2024-07-15 22:46:34.789513] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:19.618 [2024-07-15 22:46:34.789545] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:19.618 [2024-07-15 22:46:34.795503] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:19.618 [2024-07-15 22:46:34.856192] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:19.618 [2024-07-15 22:46:34.856251] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.618 request: 00:16:19.618 { 00:16:19.618 "name": "nvme", 00:16:19.618 "trtype": "tcp", 00:16:19.618 "traddr": "10.0.0.2", 00:16:19.618 "adrfam": "ipv4", 00:16:19.618 "trsvcid": "8009", 00:16:19.618 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:19.618 "wait_for_attach": true, 00:16:19.618 "method": "bdev_nvme_start_discovery", 00:16:19.618 "req_id": 1 00:16:19.618 } 00:16:19.618 Got JSON-RPC error response 00:16:19.618 response: 00:16:19.618 { 00:16:19.618 "code": -17, 00:16:19.618 "message": "File exists" 00:16:19.618 } 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.618 22:46:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.618 request: 00:16:19.618 { 00:16:19.618 "name": "nvme_second", 00:16:19.618 "trtype": "tcp", 00:16:19.618 "traddr": "10.0.0.2", 00:16:19.618 "adrfam": "ipv4", 00:16:19.618 "trsvcid": "8009", 00:16:19.618 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:19.618 "wait_for_attach": true, 00:16:19.618 "method": "bdev_nvme_start_discovery", 00:16:19.618 "req_id": 1 00:16:19.618 } 00:16:19.618 Got JSON-RPC error response 00:16:19.618 response: 00:16:19.618 { 00:16:19.618 "code": -17, 00:16:19.618 "message": "File exists" 00:16:19.618 } 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:19.618 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.619 22:46:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.619 22:46:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.570 [2024-07-15 22:46:36.120902] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:20.570 [2024-07-15 22:46:36.120956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a14350 with addr=10.0.0.2, port=8010 00:16:20.570 [2024-07-15 22:46:36.120981] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:20.570 [2024-07-15 22:46:36.120992] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:20.570 [2024-07-15 22:46:36.121003] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:21.946 [2024-07-15 22:46:37.120941] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:21.946 [2024-07-15 22:46:37.121010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a059c0 with addr=10.0.0.2, port=8010 00:16:21.946 [2024-07-15 22:46:37.121036] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:21.946 [2024-07-15 22:46:37.121047] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:21.946 [2024-07-15 22:46:37.121057] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:22.881 [2024-07-15 22:46:38.120767] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:22.881 request: 00:16:22.881 { 00:16:22.881 "name": "nvme_second", 00:16:22.881 "trtype": "tcp", 00:16:22.881 "traddr": "10.0.0.2", 00:16:22.881 "adrfam": "ipv4", 00:16:22.881 "trsvcid": "8010", 00:16:22.881 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:22.881 "wait_for_attach": false, 00:16:22.881 "attach_timeout_ms": 3000, 00:16:22.881 "method": "bdev_nvme_start_discovery", 00:16:22.881 "req_id": 1 00:16:22.881 } 00:16:22.881 Got JSON-RPC error response 00:16:22.881 response: 00:16:22.881 { 00:16:22.881 "code": -110, 
00:16:22.881 "message": "Connection timed out" 00:16:22.881 } 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76459 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.881 rmmod nvme_tcp 00:16:22.881 rmmod nvme_fabrics 00:16:22.881 rmmod nvme_keyring 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76427 ']' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76427 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76427 ']' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76427 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76427 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:22.881 killing 
process with pid 76427 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76427' 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76427 00:16:22.881 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76427 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:23.140 00:16:23.140 real 0m10.298s 00:16:23.140 user 0m20.003s 00:16:23.140 sys 0m2.049s 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.140 ************************************ 00:16:23.140 END TEST nvmf_host_discovery 00:16:23.140 ************************************ 00:16:23.140 22:46:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.140 22:46:38 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:23.140 22:46:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.140 22:46:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.140 22:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.140 ************************************ 00:16:23.140 START TEST nvmf_host_multipath_status 00:16:23.140 ************************************ 00:16:23.140 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:23.400 * Looking for test storage... 
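For reference, the discovery-timeout failure traced above can be reproduced outside the harness with a direct RPC call. A minimal sketch, assuming the host application is still serving /tmp/host.sock and nothing is listening on 10.0.0.2:8010 (rpc_cmd in the trace is the harness wrapper that ultimately drives SPDK's scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
    # Expected outcome, per the trace: the connect() attempts fail (errno 111),
    # the attach times out after 3000 ms, and the call returns JSON-RPC error
    # -110 ("Connection timed out"), which is what the NOT wrapper checks for.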
00:16:23.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:23.400 Cannot find device "nvmf_tgt_br" 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:23.400 Cannot find device "nvmf_tgt_br2" 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:23.400 Cannot find device "nvmf_tgt_br" 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:23.400 Cannot find device "nvmf_tgt_br2" 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:23.400 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.401 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.401 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.401 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.659 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.659 22:46:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.659 22:46:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:23.659 00:16:23.659 --- 10.0.0.2 ping statistics --- 00:16:23.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.659 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:23.659 00:16:23.659 --- 10.0.0.3 ping statistics --- 00:16:23.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.659 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:23.659 00:16:23.659 --- 10.0.0.1 ping statistics --- 00:16:23.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.659 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76916 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76916 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76916 ']' 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.659 22:46:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:23.918 [2024-07-15 22:46:39.251981] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:16:23.918 [2024-07-15 22:46:39.252088] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.918 [2024-07-15 22:46:39.395411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.176 [2024-07-15 22:46:39.525963] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:24.176 [2024-07-15 22:46:39.526258] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.176 [2024-07-15 22:46:39.526442] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.176 [2024-07-15 22:46:39.526629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.176 [2024-07-15 22:46:39.526675] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.176 [2024-07-15 22:46:39.526903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.176 [2024-07-15 22:46:39.526917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.176 [2024-07-15 22:46:39.584015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76916 00:16:24.753 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.317 [2024-07-15 22:46:40.585667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.317 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:25.595 Malloc0 00:16:25.595 22:46:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:25.852 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:25.852 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.110 [2024-07-15 22:46:41.668721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.366 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:26.366 [2024-07-15 22:46:41.916909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76967 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76967 /var/tmp/bdevperf.sock 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76967 ']' 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.624 22:46:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:27.557 22:46:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.557 22:46:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:27.557 22:46:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:27.816 22:46:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:28.072 Nvme0n1 00:16:28.072 22:46:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:28.330 Nvme0n1 00:16:28.330 22:46:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:28.330 22:46:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.865 22:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:30.865 22:46:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:30.865 22:46:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:31.123 22:46:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:32.056 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:32.056 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 
4420 current true 00:16:32.056 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.056 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:32.314 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.315 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:32.315 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.315 22:46:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:32.573 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.573 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.573 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.573 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:32.892 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.892 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:32.892 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:32.892 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.184 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.184 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:33.184 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.184 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.442 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.442 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:33.442 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.442 22:46:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.700 22:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.700 22:46:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:33.700 22:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:33.958 22:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:34.216 22:46:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:35.150 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:35.150 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:35.150 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.151 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:35.408 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.408 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:35.408 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.408 22:46:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:35.664 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.664 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:35.665 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.665 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:35.923 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.923 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:35.923 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.923 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.233 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.233 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.233 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:36.233 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.491 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.491 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.492 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.492 22:46:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:36.817 22:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.817 22:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:36.817 22:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:37.076 22:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:37.333 22:46:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:38.264 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:38.265 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:38.265 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.265 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.523 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.523 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:38.523 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.523 22:46:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:38.781 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:38.781 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:38.781 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.781 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.040 22:46:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.040 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:39.040 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.040 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.299 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.299 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.299 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.299 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.557 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.557 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.557 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.558 22:46:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.816 22:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.816 22:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:39.816 22:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:40.074 22:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:40.333 22:46:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:41.269 22:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:41.269 22:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:41.269 22:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.270 22:46:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:41.528 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.528 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:41.528 22:46:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.528 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.786 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:41.786 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.786 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:41.786 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.352 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.352 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.352 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.352 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.610 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.610 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.610 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.610 22:46:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:42.870 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:43.438 22:46:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:43.438 22:46:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:44.814 22:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:44.814 22:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:44.814 22:46:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.814 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:44.814 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:44.814 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:44.814 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.814 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:45.073 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.073 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:45.073 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.073 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:45.331 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.331 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:45.331 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.331 22:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:45.591 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.591 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:45.591 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.591 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:45.849 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.849 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:45.849 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:16:45.849 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:46.416 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.416 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:46.417 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:46.417 22:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:46.675 22:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.048 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:48.341 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.341 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:48.341 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.342 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:48.621 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.621 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:48.621 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.621 22:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:48.621 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.621 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:48.621 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.621 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:48.878 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.878 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:48.878 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:48.878 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.135 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.136 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:49.393 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:49.393 22:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:49.650 22:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:49.908 22:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:50.841 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:50.841 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:50.841 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.841 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:51.100 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.100 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:51.100 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.100 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:51.666 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:16:51.666 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:51.666 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:51.666 22:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.666 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.666 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:51.666 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:51.666 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.925 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.925 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:51.925 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:51.925 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.183 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.183 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:52.183 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:52.183 22:47:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:52.767 22:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:52.767 22:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:52.767 22:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:52.767 22:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:53.040 22:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:54.411 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.412 22:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:54.669 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.669 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:54.669 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.669 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:54.928 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.928 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:54.928 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.928 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:55.186 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.186 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:55.186 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.186 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:55.444 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.444 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:55.444 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.444 22:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:55.703 22:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.703 22:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 
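The trace above is driven by three small helpers in test/nvmf/host/multipath_status.sh: set_ANA_state (the @59/@60 lines) flips the ANA state of the 4420 and 4421 listeners, while check_status (the @68-@73 calls) and port_status (the @64 check) read bdev_nvme_get_io_paths over the bdevperf RPC socket and compare one io_path field per port. Below is a minimal sketch reconstructed from the xtrace output only; the real helper bodies may differ, and rpc_py, NQN and TARGET_IP are placeholder names standing in for whatever the script defines earlier.

    # Sketch of the helpers behind the trace above, reconstructed from the
    # xtrace output; the actual bodies in multipath_status.sh may differ.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    TARGET_IP=10.0.0.2

    set_ANA_state() {
        # $1: ANA state for the 4420 listener, $2: ANA state for the 4421 listener
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a "$TARGET_IP" -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a "$TARGET_IP" -s 4421 -n "$2"
    }

    port_status() {
        # $1: trsvcid, $2: io_path field (current|connected|accessible), $3: expected value
        [[ $("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

    check_status() {
        # Expected values, in the order seen in the trace:
        # 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Read against that sketch, the runs above check the expected current/connected/accessible values for ports 4420 and 4421 in that order; once bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active (the @116 call), both paths report current=true, whereas before the policy change only one path did.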
00:16:55.703 22:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:55.960 22:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:56.219 22:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:57.159 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:57.159 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:57.159 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.159 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:57.432 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.432 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:57.432 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:57.432 22:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.690 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.690 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:57.690 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.690 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:57.946 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.946 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:57.946 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.946 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:58.204 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.204 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:58.204 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.204 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:58.461 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.461 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:58.461 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:58.461 22:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.717 22:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.717 22:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:58.717 22:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:58.974 22:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:59.231 22:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:00.165 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:00.165 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:00.165 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.165 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:00.423 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.423 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:00.423 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.423 22:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:00.682 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.682 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:00.682 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.682 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:00.954 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.954 22:47:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:00.954 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:00.954 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.213 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.213 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:01.213 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:01.213 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.471 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.471 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:01.471 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.471 22:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76967 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76967 ']' 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76967 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76967 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:01.729 killing process with pid 76967 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76967' 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76967 00:17:01.729 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76967 00:17:01.994 Connection closed with partial response: 00:17:01.994 00:17:01.994 00:17:01.994 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76967 00:17:01.994 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:01.994 [2024-07-15 22:46:41.986306] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 
24.03.0 initialization... 00:17:01.994 [2024-07-15 22:46:41.986456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76967 ] 00:17:01.994 [2024-07-15 22:46:42.124740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.994 [2024-07-15 22:46:42.287681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.994 [2024-07-15 22:46:42.365842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:01.994 Running I/O for 90 seconds... 00:17:01.994 [2024-07-15 22:46:58.693567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.693706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.693784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.693810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.693836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.693855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.693879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.693897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.693921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.693938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.693964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.693981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 
sqhd:0007 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.994 [2024-07-15 22:46:58.694798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:01.994 [2024-07-15 22:46:58.694924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.994 [2024-07-15 22:46:58.694941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.694964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.694981] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.695021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.695079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.695123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.695165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:107 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.695976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.695992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.696031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.696071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.696110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.696149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.995 [2024-07-15 22:46:58.696188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:17:01.995 [2024-07-15 22:46:58.696759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.995 [2024-07-15 22:46:58.696775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.696798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.696814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.696845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.696878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.696901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.696918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.696942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.696968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.696994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.697866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.697972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.697995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:01.996 [2024-07-15 22:46:58.698110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.996 [2024-07-15 22:46:58.698394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.698434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.698473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.698524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.698578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.698630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:01.996 [2024-07-15 22:46:58.698665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.996 [2024-07-15 22:46:58.698684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.698964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.698988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.699005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.699028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.699062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.699883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:46:58.699913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.699950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.699987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.700022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.700080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.700129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.700179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.700236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:17:01.997 [2024-07-15 22:46:58.700287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:46:58.700396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:46:58.700419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.557707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.557789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.557830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.557889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.557970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.557994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.558010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.558069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.558111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.558154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.558197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.558260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.997 [2024-07-15 22:47:14.558305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.558349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.558393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.558451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.997 [2024-07-15 22:47:14.558493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:01.997 [2024-07-15 22:47:14.558517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.558550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.558592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.558648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.558690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:01.998 [2024-07-15 22:47:14.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.558771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.558824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.558865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.558905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.558946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.558971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.558989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.559801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.559981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.998 [2024-07-15 22:47:14.559999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.560024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.560058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:01.998 [2024-07-15 22:47:14.560083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.998 [2024-07-15 22:47:14.560100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:17:01.998 [2024-07-15 22:47:14.560125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.560695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.560922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.560939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.562357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.562409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.562470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.562531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.562573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.562614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.562675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.562716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.562756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.562796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.562837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.562861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:01.999 [2024-07-15 22:47:14.562878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.563816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.563846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.563895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.563915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.563941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.563959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.563985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.564029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.564090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.564134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.564178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.564223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.564265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.564358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.999 [2024-07-15 22:47:14.564401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.999 [2024-07-15 22:47:14.564445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:01.999 [2024-07-15 22:47:14.564469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.564574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.564712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.564834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.564914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.564957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.564982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.564998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:17:02.000 [2024-07-15 22:47:14.565263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.565839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.565963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.565981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.566005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.566022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.566088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.566112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.566138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.000 [2024-07-15 22:47:14.566156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:02.000 [2024-07-15 22:47:14.566182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.000 [2024-07-15 22:47:14.566200] nvme_qpair.c: 
00:17:02.000 [log condensed: the burst of nvme_qpair.c notices that filled this stretch of the console has been reduced to this placeholder. Between 22:47:14.566 and 22:47:14.579 each entry paired a 243:nvme_io_qpair_print_command READ or WRITE submission on sqid:1 with a 474:spdk_nvme_print_completion status of ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, repeated for well over a hundred cid/lba combinations while the path reported an inaccessible ANA state; the last few entries and the run summary follow.]
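For triage purposes the volume of these notices matters more than any single entry. A minimal, hedged helper for tallying them from a saved copy of this console output (the file name build.log is an assumption, not something the test writes):

# Not part of the SPDK test suite: summarize the elided notices from a saved
# console log. "build.log" is an assumed capture of this output.
log=build.log
# Submissions printed by nvme_io_qpair_print_command, counted per opcode:
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" | sort | uniq -c
# Completion statuses printed by spdk_nvme_print_completion, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)":
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*(../..)' "$log" | sort | uniq -c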
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:02.004 [2024-07-15 22:47:14.579016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.004 [2024-07-15 22:47:14.579033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:02.004 [2024-07-15 22:47:14.579075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.004 [2024-07-15 22:47:14.579093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:02.004 [2024-07-15 22:47:14.579117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.004 [2024-07-15 22:47:14.579134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:02.004 [2024-07-15 22:47:14.579159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.004 [2024-07-15 22:47:14.579177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:02.004 Received shutdown signal, test time was about 33.229589 seconds 00:17:02.004 00:17:02.004 Latency(us) 00:17:02.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.004 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:02.004 Verification LBA range: start 0x0 length 0x4000 00:17:02.004 Nvme0n1 : 33.23 8175.60 31.94 0.00 0.00 15622.81 644.19 4026531.84 00:17:02.004 =================================================================================================================== 00:17:02.004 Total : 8175.60 31.94 0.00 0.00 15622.81 644.19 4026531.84 00:17:02.004 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.263 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.263 rmmod nvme_tcp 00:17:02.522 rmmod nvme_fabrics 00:17:02.522 rmmod nvme_keyring 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76916 ']' 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76916 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76916 ']' 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76916 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76916 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:02.522 killing process with pid 76916 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76916' 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76916 00:17:02.522 22:47:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76916 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:02.781 00:17:02.781 real 0m39.514s 00:17:02.781 user 2m5.963s 00:17:02.781 sys 0m12.876s 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.781 22:47:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:02.781 ************************************ 00:17:02.781 END TEST nvmf_host_multipath_status 00:17:02.781 ************************************ 00:17:02.781 22:47:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:02.781 22:47:18 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:02.781 22:47:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:02.781 22:47:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.781 22:47:18 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.781 ************************************ 00:17:02.781 START TEST nvmf_discovery_remove_ifc 00:17:02.781 ************************************ 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:02.781 * Looking for test storage... 00:17:02.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.781 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.040 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.041 22:47:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:03.041 Cannot find device "nvmf_tgt_br" 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.041 Cannot find device "nvmf_tgt_br2" 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:03.041 Cannot find device "nvmf_tgt_br" 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:03.041 Cannot find device "nvmf_tgt_br2" 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.041 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
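The commands traced above, together with the link bring-up, bridge wiring, iptables rules, and connectivity pings that continue in the trace below, all come from the nvmf_veth_init helper in nvmf/common.sh. Condensed into one place as a sketch of what those traced commands amount to (not a substitute for the helper itself), the topology is roughly:

# Initiator stays in the root namespace on 10.0.0.1; the target gets two
# interfaces (10.0.0.2 and 10.0.0.3) inside nvmf_tgt_ns_spdk; bridge nvmf_br
# joins the veth peer ends; port 4420 is opened for NVMe/TCP.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1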
00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.041 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:03.300 00:17:03.300 --- 10.0.0.2 ping statistics --- 00:17:03.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.300 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.300 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.300 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:03.300 00:17:03.300 --- 10.0.0.3 ping statistics --- 00:17:03.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.300 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:03.300 00:17:03.300 --- 10.0.0.1 ping statistics --- 00:17:03.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.300 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77749 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77749 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77749 ']' 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.300 22:47:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.300 [2024-07-15 22:47:18.757977] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:17:03.300 [2024-07-15 22:47:18.758087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.558 [2024-07-15 22:47:18.898464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.558 [2024-07-15 22:47:19.011879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:03.558 [2024-07-15 22:47:19.011958] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.558 [2024-07-15 22:47:19.011970] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.558 [2024-07-15 22:47:19.011978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.558 [2024-07-15 22:47:19.011986] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.558 [2024-07-15 22:47:19.012024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.558 [2024-07-15 22:47:19.068506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.530 [2024-07-15 22:47:19.779388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.530 [2024-07-15 22:47:19.787484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:04.530 null0 00:17:04.530 [2024-07-15 22:47:19.819425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77787 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77787 /tmp/host.sock 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77787 ']' 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:04.530 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.530 22:47:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.530 [2024-07-15 22:47:19.897002] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:17:04.530 [2024-07-15 22:47:19.897516] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77787 ] 00:17:04.530 [2024-07-15 22:47:20.034211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.787 [2024-07-15 22:47:20.166146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.721 22:47:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.721 [2024-07-15 22:47:20.993425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:05.721 22:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.721 22:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:05.721 22:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.721 22:47:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.657 [2024-07-15 22:47:22.043394] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:06.657 [2024-07-15 22:47:22.043466] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:06.657 [2024-07-15 22:47:22.043498] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:06.657 [2024-07-15 22:47:22.049470] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:06.657 [2024-07-15 22:47:22.106787] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 
00:17:06.657 [2024-07-15 22:47:22.106849] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:06.657 [2024-07-15 22:47:22.106891] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:06.657 [2024-07-15 22:47:22.106912] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:06.657 [2024-07-15 22:47:22.106938] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.657 [2024-07-15 22:47:22.111598] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1614db0 was disconnected and freed. delete nvme_qpair. 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:06.657 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.916 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:06.916 22:47:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:07.852 22:47:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:07.852 22:47:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:08.787 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.046 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:09.046 22:47:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:09.980 22:47:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:10.916 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.174 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:11.174 22:47:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.110 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.110 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.110 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.110 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.111 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.111 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:12.111 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.111 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.111 [2024-07-15 22:47:27.534422] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:12.111 [2024-07-15 22:47:27.534487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.111 [2024-07-15 22:47:27.534512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.111 [2024-07-15 22:47:27.534526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.111 [2024-07-15 22:47:27.534535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.111 [2024-07-15 22:47:27.534546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.111 [2024-07-15 22:47:27.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.111 [2024-07-15 22:47:27.534582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.111 [2024-07-15 22:47:27.534594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.111 [2024-07-15 22:47:27.534605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.111 [2024-07-15 22:47:27.534615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.111 [2024-07-15 22:47:27.534624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ecc0 is same with the state(5) to be set 00:17:12.111 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.111 22:47:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.111 [2024-07-15 22:47:27.544416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ecc0 (9): Bad file descriptor 00:17:12.111 [2024-07-15 22:47:27.554437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:13.046 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.304 [2024-07-15 22:47:28.614602] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:13.305 [2024-07-15 22:47:28.614691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157ecc0 with addr=10.0.0.2, port=4420 00:17:13.305 [2024-07-15 22:47:28.614719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ecc0 is same with the state(5) to be set 00:17:13.305 [2024-07-15 22:47:28.614800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ecc0 (9): Bad file descriptor 00:17:13.305 [2024-07-15 22:47:28.615358] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:13.305 [2024-07-15 22:47:28.615391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:13.305 [2024-07-15 22:47:28.615406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:13.305 [2024-07-15 22:47:28.615423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:13.305 [2024-07-15 22:47:28.615462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:13.305 [2024-07-15 22:47:28.615477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:13.305 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.305 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.305 22:47:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:14.247 [2024-07-15 22:47:29.615534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
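The reconnect/reset churn above is expected: the discovery attach earlier in this log was issued with very short loss and reconnect timeouts, so once the connection loss is detected (the errno 110 recv timeout above), the host retries for only a couple of seconds before declaring the controller failed. rpc_cmd in these traces is the harness wrapper around scripts/rpc.py, so the same attach can be reproduced by hand roughly as follows; the rpc.py path is the usual in-tree location and is an assumption, not something shown in this log.

    # Reproduce the host-side discovery attach from this run, including the
    # aggressive timeouts that drive the quick failover behaviour seen above.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach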
00:17:14.247 [2024-07-15 22:47:29.615608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:14.247 [2024-07-15 22:47:29.615623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:14.247 [2024-07-15 22:47:29.615634] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:14.247 [2024-07-15 22:47:29.615669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:14.247 [2024-07-15 22:47:29.615700] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:14.247 [2024-07-15 22:47:29.615753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.247 [2024-07-15 22:47:29.615769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.247 [2024-07-15 22:47:29.615784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.247 [2024-07-15 22:47:29.615794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.247 [2024-07-15 22:47:29.615804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.247 [2024-07-15 22:47:29.615813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.247 [2024-07-15 22:47:29.615823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.247 [2024-07-15 22:47:29.615833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.247 [2024-07-15 22:47:29.615843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.247 [2024-07-15 22:47:29.615852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.247 [2024-07-15 22:47:29.615874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
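The repeating bdev_get_bdevs / jq / sort / xargs blocks throughout this test are the harness polling for the expected bdev name. Reconstructed from the xtrace, the helpers look roughly like the sketch below (not the verbatim discovery_remove_ifc.sh, and again calling rpc.py directly where the script uses rpc_cmd).

    # Poll the host app until its bdev list matches what the test expects.
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        # Loops once per second: "nvme0n1" while attached, "" after teardown.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }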
00:17:14.247 [2024-07-15 22:47:29.615914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157e940 (9): Bad file descriptor 00:17:14.247 [2024-07-15 22:47:29.616906] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:14.247 [2024-07-15 22:47:29.616923] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:14.247 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:14.248 22:47:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:15.623 22:47:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:16.191 [2024-07-15 22:47:31.627822] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:16.191 [2024-07-15 22:47:31.627864] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:16.191 [2024-07-15 22:47:31.627884] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:16.191 [2024-07-15 22:47:31.633863] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:16.191 [2024-07-15 22:47:31.690320] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:16.191 [2024-07-15 22:47:31.690535] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:16.191 [2024-07-15 22:47:31.690636] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:16.191 [2024-07-15 22:47:31.690734] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:16.191 [2024-07-15 22:47:31.690877] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:16.191 [2024-07-15 22:47:31.696488] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15cbc40 was disconnected and freed. delete nvme_qpair. 
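With nvme1 attached and the old qpair freed, the test has completed its full cycle. Stripped of the polling noise, the sequence recorded between discovery_remove_ifc.sh@72 and @86 above amounts to the following, using the helpers sketched earlier.

    wait_for_bdev nvme0n1                                   # initial discovery attach
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''                                        # bdev removed once the loss is detected and ctrlr-loss-timeout expires
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1                                   # discovery re-attaches as nvme1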
00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77787 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77787 ']' 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77787 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77787 00:17:16.452 killing process with pid 77787 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77787' 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77787 00:17:16.452 22:47:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77787 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.712 rmmod nvme_tcp 00:17:16.712 rmmod nvme_fabrics 00:17:16.712 rmmod nvme_keyring 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:16.712 22:47:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77749 ']' 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77749 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77749 ']' 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77749 00:17:16.712 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77749 00:17:16.971 killing process with pid 77749 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77749' 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77749 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77749 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.971 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.231 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:17.231 00:17:17.231 real 0m14.326s 00:17:17.231 user 0m24.921s 00:17:17.231 sys 0m2.446s 00:17:17.231 ************************************ 00:17:17.231 END TEST nvmf_discovery_remove_ifc 00:17:17.231 ************************************ 00:17:17.231 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.231 22:47:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.231 22:47:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.231 22:47:32 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:17.231 22:47:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.231 22:47:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.231 22:47:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.231 ************************************ 00:17:17.231 START TEST nvmf_identify_kernel_target 00:17:17.231 ************************************ 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:17.231 * Looking for test storage... 00:17:17.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.231 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:17.232 Cannot find device "nvmf_tgt_br" 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.232 Cannot find device "nvmf_tgt_br2" 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:17.232 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:17.232 Cannot find device "nvmf_tgt_br" 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:17.490 Cannot find device "nvmf_tgt_br2" 00:17:17.490 22:47:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.490 22:47:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.490 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.490 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:17.490 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:17.490 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.490 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:17.490 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:17.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:17:17.749 00:17:17.749 --- 10.0.0.2 ping statistics --- 00:17:17.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.749 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:17.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:17.749 00:17:17.749 --- 10.0.0.3 ping statistics --- 00:17:17.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.749 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:17.749 00:17:17.749 --- 10.0.0.1 ping statistics --- 00:17:17.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.749 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:17.749 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:17.750 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:17.750 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:18.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:18.007 Waiting for block devices as requested 00:17:18.007 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:18.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:18.264 No valid GPT data, bailing 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:18.264 No valid GPT data, bailing 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:18.264 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:18.521 No valid GPT data, bailing 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:18.521 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:18.522 No valid GPT data, bailing 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
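The entries above select /dev/nvme1n1 as a free backing block device, and the entries that follow wire it into an in-kernel NVMe-oF/TCP target through configfs (configure_kernel_target in nvmf/common.sh). A minimal stand-alone sketch of that sequence is given below; the xtrace output does not show the redirection targets of the echo commands, so the attribute file names used here are the standard kernel nvmet configfs attributes and should be read as assumptions rather than a verbatim copy of the script.

    # assumes the nvmet/nvmet_tcp modules are available and /dev/nvme1n1 is not in use
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet

    mkdir "$nvmet/subsystems/$nqn"
    echo 1 > "$nvmet/subsystems/$nqn/attr_allow_any_host"

    mkdir "$nvmet/subsystems/$nqn/namespaces/1"
    echo /dev/nvme1n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
    echo 1 > "$nvmet/subsystems/$nqn/namespaces/1/enable"

    mkdir "$nvmet/ports/1"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"

    # linking the subsystem under the port makes it reachable at 10.0.0.1:4420
    ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"

Once the symlink is in place, the nvme discover call logged below should list both the discovery subsystem and nqn.2016-06.io.spdk:testnqn, which is exactly what the two Discovery Log entries show.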
00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:18.522 22:47:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -a 10.0.0.1 -t tcp -s 4420 00:17:18.522 00:17:18.522 Discovery Log Number of Records 2, Generation counter 2 00:17:18.522 =====Discovery Log Entry 0====== 00:17:18.522 trtype: tcp 00:17:18.522 adrfam: ipv4 00:17:18.522 subtype: current discovery subsystem 00:17:18.522 treq: not specified, sq flow control disable supported 00:17:18.522 portid: 1 00:17:18.522 trsvcid: 4420 00:17:18.522 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:18.522 traddr: 10.0.0.1 00:17:18.522 eflags: none 00:17:18.522 sectype: none 00:17:18.522 =====Discovery Log Entry 1====== 00:17:18.522 trtype: tcp 00:17:18.522 adrfam: ipv4 00:17:18.522 subtype: nvme subsystem 00:17:18.522 treq: not specified, sq flow control disable supported 00:17:18.522 portid: 1 00:17:18.522 trsvcid: 4420 00:17:18.522 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:18.522 traddr: 10.0.0.1 00:17:18.522 eflags: none 00:17:18.522 sectype: none 00:17:18.522 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:18.522 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:18.781 ===================================================== 00:17:18.781 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:18.781 ===================================================== 00:17:18.781 Controller Capabilities/Features 00:17:18.781 ================================ 00:17:18.781 Vendor ID: 0000 00:17:18.781 Subsystem Vendor ID: 0000 00:17:18.781 Serial Number: 90dcc0c109296969e383 00:17:18.781 Model Number: Linux 00:17:18.781 Firmware Version: 6.7.0-68 00:17:18.781 Recommended Arb Burst: 0 00:17:18.781 IEEE OUI Identifier: 00 00 00 00:17:18.781 Multi-path I/O 00:17:18.781 May have multiple subsystem ports: No 00:17:18.781 May have multiple controllers: No 00:17:18.781 Associated with SR-IOV VF: No 00:17:18.781 Max Data Transfer Size: Unlimited 00:17:18.781 Max Number of Namespaces: 0 
00:17:18.781 Max Number of I/O Queues: 1024 00:17:18.781 NVMe Specification Version (VS): 1.3 00:17:18.781 NVMe Specification Version (Identify): 1.3 00:17:18.781 Maximum Queue Entries: 1024 00:17:18.781 Contiguous Queues Required: No 00:17:18.781 Arbitration Mechanisms Supported 00:17:18.781 Weighted Round Robin: Not Supported 00:17:18.781 Vendor Specific: Not Supported 00:17:18.781 Reset Timeout: 7500 ms 00:17:18.781 Doorbell Stride: 4 bytes 00:17:18.781 NVM Subsystem Reset: Not Supported 00:17:18.781 Command Sets Supported 00:17:18.781 NVM Command Set: Supported 00:17:18.781 Boot Partition: Not Supported 00:17:18.781 Memory Page Size Minimum: 4096 bytes 00:17:18.781 Memory Page Size Maximum: 4096 bytes 00:17:18.781 Persistent Memory Region: Not Supported 00:17:18.781 Optional Asynchronous Events Supported 00:17:18.781 Namespace Attribute Notices: Not Supported 00:17:18.781 Firmware Activation Notices: Not Supported 00:17:18.781 ANA Change Notices: Not Supported 00:17:18.781 PLE Aggregate Log Change Notices: Not Supported 00:17:18.781 LBA Status Info Alert Notices: Not Supported 00:17:18.781 EGE Aggregate Log Change Notices: Not Supported 00:17:18.781 Normal NVM Subsystem Shutdown event: Not Supported 00:17:18.781 Zone Descriptor Change Notices: Not Supported 00:17:18.781 Discovery Log Change Notices: Supported 00:17:18.781 Controller Attributes 00:17:18.781 128-bit Host Identifier: Not Supported 00:17:18.781 Non-Operational Permissive Mode: Not Supported 00:17:18.781 NVM Sets: Not Supported 00:17:18.781 Read Recovery Levels: Not Supported 00:17:18.781 Endurance Groups: Not Supported 00:17:18.781 Predictable Latency Mode: Not Supported 00:17:18.781 Traffic Based Keep ALive: Not Supported 00:17:18.781 Namespace Granularity: Not Supported 00:17:18.781 SQ Associations: Not Supported 00:17:18.781 UUID List: Not Supported 00:17:18.781 Multi-Domain Subsystem: Not Supported 00:17:18.781 Fixed Capacity Management: Not Supported 00:17:18.781 Variable Capacity Management: Not Supported 00:17:18.781 Delete Endurance Group: Not Supported 00:17:18.781 Delete NVM Set: Not Supported 00:17:18.781 Extended LBA Formats Supported: Not Supported 00:17:18.781 Flexible Data Placement Supported: Not Supported 00:17:18.781 00:17:18.781 Controller Memory Buffer Support 00:17:18.781 ================================ 00:17:18.781 Supported: No 00:17:18.781 00:17:18.781 Persistent Memory Region Support 00:17:18.781 ================================ 00:17:18.781 Supported: No 00:17:18.781 00:17:18.781 Admin Command Set Attributes 00:17:18.781 ============================ 00:17:18.781 Security Send/Receive: Not Supported 00:17:18.781 Format NVM: Not Supported 00:17:18.781 Firmware Activate/Download: Not Supported 00:17:18.781 Namespace Management: Not Supported 00:17:18.781 Device Self-Test: Not Supported 00:17:18.781 Directives: Not Supported 00:17:18.781 NVMe-MI: Not Supported 00:17:18.781 Virtualization Management: Not Supported 00:17:18.781 Doorbell Buffer Config: Not Supported 00:17:18.781 Get LBA Status Capability: Not Supported 00:17:18.781 Command & Feature Lockdown Capability: Not Supported 00:17:18.781 Abort Command Limit: 1 00:17:18.781 Async Event Request Limit: 1 00:17:18.781 Number of Firmware Slots: N/A 00:17:18.781 Firmware Slot 1 Read-Only: N/A 00:17:18.781 Firmware Activation Without Reset: N/A 00:17:18.781 Multiple Update Detection Support: N/A 00:17:18.781 Firmware Update Granularity: No Information Provided 00:17:18.781 Per-Namespace SMART Log: No 00:17:18.781 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:18.781 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:18.781 Command Effects Log Page: Not Supported 00:17:18.781 Get Log Page Extended Data: Supported 00:17:18.781 Telemetry Log Pages: Not Supported 00:17:18.781 Persistent Event Log Pages: Not Supported 00:17:18.781 Supported Log Pages Log Page: May Support 00:17:18.781 Commands Supported & Effects Log Page: Not Supported 00:17:18.781 Feature Identifiers & Effects Log Page:May Support 00:17:18.781 NVMe-MI Commands & Effects Log Page: May Support 00:17:18.781 Data Area 4 for Telemetry Log: Not Supported 00:17:18.781 Error Log Page Entries Supported: 1 00:17:18.781 Keep Alive: Not Supported 00:17:18.781 00:17:18.781 NVM Command Set Attributes 00:17:18.781 ========================== 00:17:18.781 Submission Queue Entry Size 00:17:18.781 Max: 1 00:17:18.781 Min: 1 00:17:18.781 Completion Queue Entry Size 00:17:18.781 Max: 1 00:17:18.781 Min: 1 00:17:18.781 Number of Namespaces: 0 00:17:18.781 Compare Command: Not Supported 00:17:18.781 Write Uncorrectable Command: Not Supported 00:17:18.781 Dataset Management Command: Not Supported 00:17:18.781 Write Zeroes Command: Not Supported 00:17:18.781 Set Features Save Field: Not Supported 00:17:18.781 Reservations: Not Supported 00:17:18.781 Timestamp: Not Supported 00:17:18.781 Copy: Not Supported 00:17:18.781 Volatile Write Cache: Not Present 00:17:18.781 Atomic Write Unit (Normal): 1 00:17:18.781 Atomic Write Unit (PFail): 1 00:17:18.781 Atomic Compare & Write Unit: 1 00:17:18.781 Fused Compare & Write: Not Supported 00:17:18.781 Scatter-Gather List 00:17:18.781 SGL Command Set: Supported 00:17:18.781 SGL Keyed: Not Supported 00:17:18.781 SGL Bit Bucket Descriptor: Not Supported 00:17:18.781 SGL Metadata Pointer: Not Supported 00:17:18.781 Oversized SGL: Not Supported 00:17:18.781 SGL Metadata Address: Not Supported 00:17:18.781 SGL Offset: Supported 00:17:18.781 Transport SGL Data Block: Not Supported 00:17:18.781 Replay Protected Memory Block: Not Supported 00:17:18.781 00:17:18.781 Firmware Slot Information 00:17:18.781 ========================= 00:17:18.781 Active slot: 0 00:17:18.781 00:17:18.781 00:17:18.781 Error Log 00:17:18.781 ========= 00:17:18.781 00:17:18.781 Active Namespaces 00:17:18.781 ================= 00:17:18.781 Discovery Log Page 00:17:18.781 ================== 00:17:18.781 Generation Counter: 2 00:17:18.781 Number of Records: 2 00:17:18.781 Record Format: 0 00:17:18.781 00:17:18.781 Discovery Log Entry 0 00:17:18.781 ---------------------- 00:17:18.781 Transport Type: 3 (TCP) 00:17:18.781 Address Family: 1 (IPv4) 00:17:18.781 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:18.781 Entry Flags: 00:17:18.781 Duplicate Returned Information: 0 00:17:18.781 Explicit Persistent Connection Support for Discovery: 0 00:17:18.781 Transport Requirements: 00:17:18.781 Secure Channel: Not Specified 00:17:18.781 Port ID: 1 (0x0001) 00:17:18.781 Controller ID: 65535 (0xffff) 00:17:18.781 Admin Max SQ Size: 32 00:17:18.781 Transport Service Identifier: 4420 00:17:18.781 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:18.781 Transport Address: 10.0.0.1 00:17:18.781 Discovery Log Entry 1 00:17:18.781 ---------------------- 00:17:18.782 Transport Type: 3 (TCP) 00:17:18.782 Address Family: 1 (IPv4) 00:17:18.782 Subsystem Type: 2 (NVM Subsystem) 00:17:18.782 Entry Flags: 00:17:18.782 Duplicate Returned Information: 0 00:17:18.782 Explicit Persistent Connection Support for Discovery: 0 00:17:18.782 Transport Requirements: 00:17:18.782 
Secure Channel: Not Specified 00:17:18.782 Port ID: 1 (0x0001) 00:17:18.782 Controller ID: 65535 (0xffff) 00:17:18.782 Admin Max SQ Size: 32 00:17:18.782 Transport Service Identifier: 4420 00:17:18.782 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:18.782 Transport Address: 10.0.0.1 00:17:18.782 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:18.782 get_feature(0x01) failed 00:17:18.782 get_feature(0x02) failed 00:17:18.782 get_feature(0x04) failed 00:17:18.782 ===================================================== 00:17:18.782 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:18.782 ===================================================== 00:17:18.782 Controller Capabilities/Features 00:17:18.782 ================================ 00:17:18.782 Vendor ID: 0000 00:17:18.782 Subsystem Vendor ID: 0000 00:17:18.782 Serial Number: 76fa16e21c7b627a3808 00:17:18.782 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:18.782 Firmware Version: 6.7.0-68 00:17:18.782 Recommended Arb Burst: 6 00:17:18.782 IEEE OUI Identifier: 00 00 00 00:17:18.782 Multi-path I/O 00:17:18.782 May have multiple subsystem ports: Yes 00:17:18.782 May have multiple controllers: Yes 00:17:18.782 Associated with SR-IOV VF: No 00:17:18.782 Max Data Transfer Size: Unlimited 00:17:18.782 Max Number of Namespaces: 1024 00:17:18.782 Max Number of I/O Queues: 128 00:17:18.782 NVMe Specification Version (VS): 1.3 00:17:18.782 NVMe Specification Version (Identify): 1.3 00:17:18.782 Maximum Queue Entries: 1024 00:17:18.782 Contiguous Queues Required: No 00:17:18.782 Arbitration Mechanisms Supported 00:17:18.782 Weighted Round Robin: Not Supported 00:17:18.782 Vendor Specific: Not Supported 00:17:18.782 Reset Timeout: 7500 ms 00:17:18.782 Doorbell Stride: 4 bytes 00:17:18.782 NVM Subsystem Reset: Not Supported 00:17:18.782 Command Sets Supported 00:17:18.782 NVM Command Set: Supported 00:17:18.782 Boot Partition: Not Supported 00:17:18.782 Memory Page Size Minimum: 4096 bytes 00:17:18.782 Memory Page Size Maximum: 4096 bytes 00:17:18.782 Persistent Memory Region: Not Supported 00:17:18.782 Optional Asynchronous Events Supported 00:17:18.782 Namespace Attribute Notices: Supported 00:17:18.782 Firmware Activation Notices: Not Supported 00:17:18.782 ANA Change Notices: Supported 00:17:18.782 PLE Aggregate Log Change Notices: Not Supported 00:17:18.782 LBA Status Info Alert Notices: Not Supported 00:17:18.782 EGE Aggregate Log Change Notices: Not Supported 00:17:18.782 Normal NVM Subsystem Shutdown event: Not Supported 00:17:18.782 Zone Descriptor Change Notices: Not Supported 00:17:18.782 Discovery Log Change Notices: Not Supported 00:17:18.782 Controller Attributes 00:17:18.782 128-bit Host Identifier: Supported 00:17:18.782 Non-Operational Permissive Mode: Not Supported 00:17:18.782 NVM Sets: Not Supported 00:17:18.782 Read Recovery Levels: Not Supported 00:17:18.782 Endurance Groups: Not Supported 00:17:18.782 Predictable Latency Mode: Not Supported 00:17:18.782 Traffic Based Keep ALive: Supported 00:17:18.782 Namespace Granularity: Not Supported 00:17:18.782 SQ Associations: Not Supported 00:17:18.782 UUID List: Not Supported 00:17:18.782 Multi-Domain Subsystem: Not Supported 00:17:18.782 Fixed Capacity Management: Not Supported 00:17:18.782 Variable Capacity Management: Not Supported 00:17:18.782 
Delete Endurance Group: Not Supported 00:17:18.782 Delete NVM Set: Not Supported 00:17:18.782 Extended LBA Formats Supported: Not Supported 00:17:18.782 Flexible Data Placement Supported: Not Supported 00:17:18.782 00:17:18.782 Controller Memory Buffer Support 00:17:18.782 ================================ 00:17:18.782 Supported: No 00:17:18.782 00:17:18.782 Persistent Memory Region Support 00:17:18.782 ================================ 00:17:18.782 Supported: No 00:17:18.782 00:17:18.782 Admin Command Set Attributes 00:17:18.782 ============================ 00:17:18.782 Security Send/Receive: Not Supported 00:17:18.782 Format NVM: Not Supported 00:17:18.782 Firmware Activate/Download: Not Supported 00:17:18.782 Namespace Management: Not Supported 00:17:18.782 Device Self-Test: Not Supported 00:17:18.782 Directives: Not Supported 00:17:18.782 NVMe-MI: Not Supported 00:17:18.782 Virtualization Management: Not Supported 00:17:18.782 Doorbell Buffer Config: Not Supported 00:17:18.782 Get LBA Status Capability: Not Supported 00:17:18.782 Command & Feature Lockdown Capability: Not Supported 00:17:18.782 Abort Command Limit: 4 00:17:18.782 Async Event Request Limit: 4 00:17:18.782 Number of Firmware Slots: N/A 00:17:18.782 Firmware Slot 1 Read-Only: N/A 00:17:18.782 Firmware Activation Without Reset: N/A 00:17:18.782 Multiple Update Detection Support: N/A 00:17:18.782 Firmware Update Granularity: No Information Provided 00:17:18.782 Per-Namespace SMART Log: Yes 00:17:18.782 Asymmetric Namespace Access Log Page: Supported 00:17:18.782 ANA Transition Time : 10 sec 00:17:18.782 00:17:18.782 Asymmetric Namespace Access Capabilities 00:17:18.782 ANA Optimized State : Supported 00:17:18.782 ANA Non-Optimized State : Supported 00:17:18.782 ANA Inaccessible State : Supported 00:17:18.782 ANA Persistent Loss State : Supported 00:17:18.782 ANA Change State : Supported 00:17:18.782 ANAGRPID is not changed : No 00:17:18.782 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:18.782 00:17:18.782 ANA Group Identifier Maximum : 128 00:17:18.782 Number of ANA Group Identifiers : 128 00:17:18.782 Max Number of Allowed Namespaces : 1024 00:17:18.782 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:18.782 Command Effects Log Page: Supported 00:17:18.782 Get Log Page Extended Data: Supported 00:17:18.782 Telemetry Log Pages: Not Supported 00:17:18.782 Persistent Event Log Pages: Not Supported 00:17:18.782 Supported Log Pages Log Page: May Support 00:17:18.782 Commands Supported & Effects Log Page: Not Supported 00:17:18.782 Feature Identifiers & Effects Log Page:May Support 00:17:18.782 NVMe-MI Commands & Effects Log Page: May Support 00:17:18.782 Data Area 4 for Telemetry Log: Not Supported 00:17:18.782 Error Log Page Entries Supported: 128 00:17:18.782 Keep Alive: Supported 00:17:18.782 Keep Alive Granularity: 1000 ms 00:17:18.782 00:17:18.782 NVM Command Set Attributes 00:17:18.782 ========================== 00:17:18.782 Submission Queue Entry Size 00:17:18.782 Max: 64 00:17:18.782 Min: 64 00:17:18.782 Completion Queue Entry Size 00:17:18.782 Max: 16 00:17:18.782 Min: 16 00:17:18.782 Number of Namespaces: 1024 00:17:18.782 Compare Command: Not Supported 00:17:18.782 Write Uncorrectable Command: Not Supported 00:17:18.782 Dataset Management Command: Supported 00:17:18.782 Write Zeroes Command: Supported 00:17:18.782 Set Features Save Field: Not Supported 00:17:18.782 Reservations: Not Supported 00:17:18.782 Timestamp: Not Supported 00:17:18.782 Copy: Not Supported 00:17:18.782 Volatile Write Cache: Present 
00:17:18.782 Atomic Write Unit (Normal): 1 00:17:18.782 Atomic Write Unit (PFail): 1 00:17:18.782 Atomic Compare & Write Unit: 1 00:17:18.782 Fused Compare & Write: Not Supported 00:17:18.782 Scatter-Gather List 00:17:18.782 SGL Command Set: Supported 00:17:18.782 SGL Keyed: Not Supported 00:17:18.782 SGL Bit Bucket Descriptor: Not Supported 00:17:18.782 SGL Metadata Pointer: Not Supported 00:17:18.782 Oversized SGL: Not Supported 00:17:18.782 SGL Metadata Address: Not Supported 00:17:18.782 SGL Offset: Supported 00:17:18.782 Transport SGL Data Block: Not Supported 00:17:18.782 Replay Protected Memory Block: Not Supported 00:17:18.782 00:17:18.782 Firmware Slot Information 00:17:18.782 ========================= 00:17:18.782 Active slot: 0 00:17:18.782 00:17:18.782 Asymmetric Namespace Access 00:17:18.782 =========================== 00:17:18.782 Change Count : 0 00:17:18.782 Number of ANA Group Descriptors : 1 00:17:18.782 ANA Group Descriptor : 0 00:17:18.782 ANA Group ID : 1 00:17:18.782 Number of NSID Values : 1 00:17:18.782 Change Count : 0 00:17:18.782 ANA State : 1 00:17:18.782 Namespace Identifier : 1 00:17:18.782 00:17:18.782 Commands Supported and Effects 00:17:18.782 ============================== 00:17:18.782 Admin Commands 00:17:18.782 -------------- 00:17:18.782 Get Log Page (02h): Supported 00:17:18.782 Identify (06h): Supported 00:17:18.782 Abort (08h): Supported 00:17:18.782 Set Features (09h): Supported 00:17:18.782 Get Features (0Ah): Supported 00:17:18.782 Asynchronous Event Request (0Ch): Supported 00:17:18.782 Keep Alive (18h): Supported 00:17:18.782 I/O Commands 00:17:18.782 ------------ 00:17:18.782 Flush (00h): Supported 00:17:18.782 Write (01h): Supported LBA-Change 00:17:18.782 Read (02h): Supported 00:17:18.782 Write Zeroes (08h): Supported LBA-Change 00:17:18.782 Dataset Management (09h): Supported 00:17:18.782 00:17:18.782 Error Log 00:17:18.782 ========= 00:17:18.782 Entry: 0 00:17:18.782 Error Count: 0x3 00:17:18.782 Submission Queue Id: 0x0 00:17:18.783 Command Id: 0x5 00:17:18.783 Phase Bit: 0 00:17:18.783 Status Code: 0x2 00:17:18.783 Status Code Type: 0x0 00:17:18.783 Do Not Retry: 1 00:17:19.041 Error Location: 0x28 00:17:19.041 LBA: 0x0 00:17:19.041 Namespace: 0x0 00:17:19.041 Vendor Log Page: 0x0 00:17:19.041 ----------- 00:17:19.041 Entry: 1 00:17:19.041 Error Count: 0x2 00:17:19.041 Submission Queue Id: 0x0 00:17:19.041 Command Id: 0x5 00:17:19.041 Phase Bit: 0 00:17:19.041 Status Code: 0x2 00:17:19.041 Status Code Type: 0x0 00:17:19.041 Do Not Retry: 1 00:17:19.041 Error Location: 0x28 00:17:19.041 LBA: 0x0 00:17:19.041 Namespace: 0x0 00:17:19.041 Vendor Log Page: 0x0 00:17:19.041 ----------- 00:17:19.041 Entry: 2 00:17:19.041 Error Count: 0x1 00:17:19.041 Submission Queue Id: 0x0 00:17:19.041 Command Id: 0x4 00:17:19.041 Phase Bit: 0 00:17:19.041 Status Code: 0x2 00:17:19.041 Status Code Type: 0x0 00:17:19.041 Do Not Retry: 1 00:17:19.041 Error Location: 0x28 00:17:19.041 LBA: 0x0 00:17:19.041 Namespace: 0x0 00:17:19.041 Vendor Log Page: 0x0 00:17:19.041 00:17:19.041 Number of Queues 00:17:19.041 ================ 00:17:19.041 Number of I/O Submission Queues: 128 00:17:19.041 Number of I/O Completion Queues: 128 00:17:19.041 00:17:19.041 ZNS Specific Controller Data 00:17:19.041 ============================ 00:17:19.041 Zone Append Size Limit: 0 00:17:19.041 00:17:19.041 00:17:19.041 Active Namespaces 00:17:19.041 ================= 00:17:19.041 get_feature(0x05) failed 00:17:19.041 Namespace ID:1 00:17:19.041 Command Set Identifier: NVM (00h) 
00:17:19.041 Deallocate: Supported 00:17:19.041 Deallocated/Unwritten Error: Not Supported 00:17:19.041 Deallocated Read Value: Unknown 00:17:19.041 Deallocate in Write Zeroes: Not Supported 00:17:19.041 Deallocated Guard Field: 0xFFFF 00:17:19.041 Flush: Supported 00:17:19.041 Reservation: Not Supported 00:17:19.041 Namespace Sharing Capabilities: Multiple Controllers 00:17:19.041 Size (in LBAs): 1310720 (5GiB) 00:17:19.041 Capacity (in LBAs): 1310720 (5GiB) 00:17:19.041 Utilization (in LBAs): 1310720 (5GiB) 00:17:19.041 UUID: dde31d72-dd25-499a-8d99-03ac07a1f850 00:17:19.041 Thin Provisioning: Not Supported 00:17:19.041 Per-NS Atomic Units: Yes 00:17:19.041 Atomic Boundary Size (Normal): 0 00:17:19.041 Atomic Boundary Size (PFail): 0 00:17:19.041 Atomic Boundary Offset: 0 00:17:19.041 NGUID/EUI64 Never Reused: No 00:17:19.041 ANA group ID: 1 00:17:19.041 Namespace Write Protected: No 00:17:19.041 Number of LBA Formats: 1 00:17:19.041 Current LBA Format: LBA Format #00 00:17:19.041 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:19.041 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.041 rmmod nvme_tcp 00:17:19.041 rmmod nvme_fabrics 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:19.041 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:19.042 
22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:19.042 22:47:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:19.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:19.976 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:19.976 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:19.976 00:17:19.976 real 0m2.766s 00:17:19.976 user 0m0.960s 00:17:19.976 sys 0m1.297s 00:17:19.976 22:47:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.976 22:47:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.976 ************************************ 00:17:19.976 END TEST nvmf_identify_kernel_target 00:17:19.976 ************************************ 00:17:19.976 22:47:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:19.976 22:47:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:19.976 22:47:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:19.976 22:47:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.976 22:47:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.976 ************************************ 00:17:19.976 START TEST nvmf_auth_host 00:17:19.976 ************************************ 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:19.976 * Looking for test storage... 
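For completeness, the clean_kernel_target teardown logged just above (echo 0, rm -f, rmdir, modprobe -r) is the mirror image of that configfs setup. A minimal sketch, under the same assumption about the attribute file names:

    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet

    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # quiesce the namespace first
    rm -f "$nvmet/ports/1/subsystems/$nqn"                  # detach the subsystem from the port
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                             # unload once nothing holds the modules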
00:17:19.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.976 22:47:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:19.977 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:20.236 Cannot find device "nvmf_tgt_br" 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.236 Cannot find device "nvmf_tgt_br2" 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:20.236 Cannot find device "nvmf_tgt_br" 
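The "Cannot find device" and "Cannot open network namespace" messages in this stretch come from the idempotent teardown at the top of nvmf_veth_init and are expected on a host where the topology does not exist yet. The topology the function then rebuilds, and which the following entries log command by command, can be summarised by the sketch below; interface, bridge and namespace names are taken from the log itself, and the grouping into a loop is editorial.

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator side, two for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # the target-side ends live inside the namespace that will run the nvmf target
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # a bridge ties the host-side peer interfaces together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic reach port 4420 and cross the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm that the bridge forwards in both directions before the target application is started.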
00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:20.236 Cannot find device "nvmf_tgt_br2" 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.236 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:17:20.495 00:17:20.495 --- 10.0.0.2 ping statistics --- 00:17:20.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.495 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:20.495 00:17:20.495 --- 10.0.0.3 ping statistics --- 00:17:20.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.495 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:20.495 00:17:20.495 --- 10.0.0.1 ping statistics --- 00:17:20.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.495 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78666 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78666 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78666 ']' 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.495 22:47:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.495 22:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.450 22:47:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.450 22:47:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:21.450 22:47:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.450 22:47:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.450 22:47:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1551e27034e1f6b75d93ab20571931a1 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4NV 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1551e27034e1f6b75d93ab20571931a1 0 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1551e27034e1f6b75d93ab20571931a1 0 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1551e27034e1f6b75d93ab20571931a1 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4NV 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4NV 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4NV 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c5b65acf5afdd8f99ab1f5b6d08027de574077873a757dfa9bd204a6f15a1a7 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MyE 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c5b65acf5afdd8f99ab1f5b6d08027de574077873a757dfa9bd204a6f15a1a7 3 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c5b65acf5afdd8f99ab1f5b6d08027de574077873a757dfa9bd204a6f15a1a7 3 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c5b65acf5afdd8f99ab1f5b6d08027de574077873a757dfa9bd204a6f15a1a7 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MyE 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MyE 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MyE 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd3551c5d93b29873ef7e7eae4867c3ebc3e86d1169fe243 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zbW 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd3551c5d93b29873ef7e7eae4867c3ebc3e86d1169fe243 0 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd3551c5d93b29873ef7e7eae4867c3ebc3e86d1169fe243 0 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd3551c5d93b29873ef7e7eae4867c3ebc3e86d1169fe243 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zbW 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zbW 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zbW 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.709 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ad16c8e00958a61f47826f292de72eef2ff09ee987759f41 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.57G 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ad16c8e00958a61f47826f292de72eef2ff09ee987759f41 2 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ad16c8e00958a61f47826f292de72eef2ff09ee987759f41 2 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ad16c8e00958a61f47826f292de72eef2ff09ee987759f41 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:21.710 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.57G 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.57G 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.57G 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5626c224818031ec08ddd738c39d57ee 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CiR 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5626c224818031ec08ddd738c39d57ee 
1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5626c224818031ec08ddd738c39d57ee 1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5626c224818031ec08ddd738c39d57ee 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CiR 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CiR 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.CiR 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ba2b2b1835eb9814ab214dc4cf5ee38 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.WhJ 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ba2b2b1835eb9814ab214dc4cf5ee38 1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ba2b2b1835eb9814ab214dc4cf5ee38 1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ba2b2b1835eb9814ab214dc4cf5ee38 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.WhJ 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.WhJ 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.WhJ 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:21.969 22:47:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=74955330bbf5c9bc5a18250f36ab341a1c72d9c49e9d4a17 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HXh 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 74955330bbf5c9bc5a18250f36ab341a1c72d9c49e9d4a17 2 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 74955330bbf5c9bc5a18250f36ab341a1c72d9c49e9d4a17 2 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=74955330bbf5c9bc5a18250f36ab341a1c72d9c49e9d4a17 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HXh 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HXh 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HXh 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:21.969 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2aa7b1ecc6d491bddc17eec2dfd45333 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dMv 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2aa7b1ecc6d491bddc17eec2dfd45333 0 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2aa7b1ecc6d491bddc17eec2dfd45333 0 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2aa7b1ecc6d491bddc17eec2dfd45333 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dMv 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dMv 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dMv 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c1ddd4d890583f4609117713f67c0dab33425f39ec2caf3c0b8273f672fde0e 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XGS 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c1ddd4d890583f4609117713f67c0dab33425f39ec2caf3c0b8273f672fde0e 3 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c1ddd4d890583f4609117713f67c0dab33425f39ec2caf3c0b8273f672fde0e 3 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c1ddd4d890583f4609117713f67c0dab33425f39ec2caf3c0b8273f672fde0e 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:21.970 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XGS 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XGS 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.XGS 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78666 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78666 ']' 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
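The gen_dhchap_key calls traced above all follow one pattern: read N random bytes from /dev/urandom with xxd -p, pick a digest id (0=null, 1=sha256, 2=sha384, 3=sha512), and hand both to an inline python snippet that xtrace does not expand. The sketch below is a hedged reconstruction of that pattern, not the literal body of nvmf/common.sh; in particular the base64-plus-CRC32 step is an assumption based on the DH-HMAC-CHAP secret layout (DHHC-1:<digest>:<base64>:), chosen because the traced key strings begin with the plain base64 of the hex value and end with a short trailer.

# Hedged sketch of gen_dhchap_key/format_dhchap_key (example: "gen_dhchap_key sha256 32").
digest=1                                     # 0=null, 1=sha256, 2=sha384, 3=sha512
nbytes=16                                    # 16 random bytes -> 32 hex characters
key=$(xxd -p -c0 -l "$nbytes" /dev/urandom)  # e.g. 5626c224818031ec08ddd738c39d57ee above
file=$(mktemp -t spdk.key-sha256.XXX)
# Assumption: the hex string itself (not its decoded bytes) is the secret, and a
# little-endian CRC32 trailer is appended before base64 encoding.
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
PY
chmod 0600 "$file"
echo "$file"                                 # the test stores this path in keys[] / ckeys[]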
00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.229 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4NV 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MyE ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MyE 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zbW 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.57G ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.57G 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CiR 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.WhJ ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WhJ 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
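Stripped of the xtrace noise, the block above registers each generated secret file with the SPDK application under a predictable keyring name (keyN for the host secret, ckeyN for the optional controller secret); the key3/key4 registrations follow the same pattern in the next entries, and ckey4 is deliberately left empty. rpc_cmd here is the suite's wrapper around scripts/rpc.py and /var/tmp/spdk.sock, so a condensed, explicit equivalent of the traced calls looks roughly like this (a sketch, not the script's literal code):

# Talk to the app started with -r /var/tmp/spdk.sock (default RPC socket).
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Host secrets (keyN) and bidirectional controller secrets (ckeyN) land in the
# keyring so the later bdev_nvme_attach_controller calls can reference them by name.
rpc keyring_file_add_key key0  /tmp/spdk.key-null.4NV
rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MyE
rpc keyring_file_add_key key1  /tmp/spdk.key-null.zbW
rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.57G
rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.CiR
rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WhJ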
00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.HXh 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dMv ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dMv 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.XGS 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.488 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
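configure_kernel_target, entered just above, builds the other half of this test: a kernel nvmet subsystem (nqn.2024-02.io.spdk:cnode0) backed by the namespace selected by the block-device scan below and listening on 10.0.0.1:4420, which the SPDK application then connects to as a host. The mkdir/echo/ln -s calls show up a little further on, but xtrace hides their redirection targets, so the configfs attribute names in this condensed sketch (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions based on the standard nvmet layout rather than lines taken from the log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet                # the initiator-side nvme-tcp module was already loaded above
mkdir -p "$subsys" "$ns" "$port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"          # model string echoed in the trace
echo 1                               > "$subsys/attr_allow_any_host" # assumed target of the first bare "echo 1"
echo /dev/nvme1n1                    > "$ns/device_path"             # device picked by the scan below
echo 1                               > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Publish the subsystem through the port; the auth test then creates
# hosts/nqn.2024-02.io.spdk:host0 and links it into the subsystem's allowed_hosts.
ln -s "$subsys" "$port/subsystems/"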
00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:22.489 22:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:22.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.747 Waiting for block devices as requested 00:17:22.747 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:23.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:23.574 22:47:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:23.574 No valid GPT data, bailing 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:23.574 No valid GPT data, bailing 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:23.574 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:23.833 No valid GPT data, bailing 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:23.833 No valid GPT data, bailing 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:23.833 22:47:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -a 10.0.0.1 -t tcp -s 4420 00:17:23.833 00:17:23.833 Discovery Log Number of Records 2, Generation counter 2 00:17:23.833 =====Discovery Log Entry 0====== 00:17:23.833 trtype: tcp 00:17:23.833 adrfam: ipv4 00:17:23.833 subtype: current discovery subsystem 00:17:23.833 treq: not specified, sq flow control disable supported 00:17:23.833 portid: 1 00:17:23.833 trsvcid: 4420 00:17:23.833 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:23.833 traddr: 10.0.0.1 00:17:23.833 eflags: none 00:17:23.833 sectype: none 00:17:23.833 =====Discovery Log Entry 1====== 00:17:23.833 trtype: tcp 00:17:23.833 adrfam: ipv4 00:17:23.833 subtype: nvme subsystem 00:17:23.833 treq: not specified, sq flow control disable supported 00:17:23.833 portid: 1 00:17:23.833 trsvcid: 4420 00:17:23.833 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:23.833 traddr: 10.0.0.1 00:17:23.833 eflags: none 00:17:23.833 sectype: none 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:23.833 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.093 nvme0n1 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.093 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.351 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.352 nvme0n1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.352 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.610 nvme0n1 00:17:24.610 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.610 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.610 22:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.610 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.610 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.610 22:47:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.610 22:47:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.610 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.611 nvme0n1 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.611 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:24.868 22:47:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.868 nvme0n1 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:24.868 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:24.869 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.126 nvme0n1 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.126 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.429 22:47:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.688 nvme0n1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.688 nvme0n1 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.688 22:47:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.688 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 nvme0n1 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.948 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.209 nvme0n1 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
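[editor's note] For readability, the following is a hedged reconstruction of the driver loop whose xtrace output appears above (the host/auth.sh@101-@104 markers in this run). The dhgroups and keys arrays and the nvmet_auth_set_key/connect_authenticate helpers are assumed to be defined earlier in auth.sh; only their invocation order is taken from the trace, and sha256 is the digest exercised in this portion of the run.

    # Assumed driver loop, reconstructed from the @101-@104 trace markers above.
    for dhgroup in "${dhgroups[@]}"; do                      # ffdhe2048, ffdhe3072, ffdhe4096, ... per the trace
        for keyid in "${!keys[@]}"; do                       # key indices 0..4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # provision key/ctrlr-key on the kernel nvmet target
            connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify, detach on the initiator side
        done
    done
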
00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.209 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.469 nvme0n1 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.469 22:47:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
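[editor's note] The connect_authenticate helper traced above (host/auth.sh@55-@65) boils down to the sequence sketched below. This is assembled from the trace only: rpc_cmd is assumed to be the test suite's wrapper for SPDK JSON-RPC calls, and 10.0.0.1:4420 / the nqn.2024-02.io.spdk names are the listener address and NQNs used in this particular run.

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # Pass --dhchap-ctrlr-key only when a controller key exists for this keyid.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect over TCP with DH-HMAC-CHAP using the matching key (and ctrlr key, if any).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the authenticated controller is present, then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
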
00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.033 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.290 nvme0n1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.290 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.549 nvme0n1 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.549 22:47:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.549 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.550 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.550 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.550 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.807 nvme0n1 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.807 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.065 nvme0n1 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.065 22:47:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.065 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.323 nvme0n1 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.323 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:28.324 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:28.324 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.324 22:47:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.224 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.483 nvme0n1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.483 22:47:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.741 nvme0n1 00:17:30.741 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.741 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.741 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.741 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.741 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.741 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.002 
22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.002 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 nvme0n1 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.267 22:47:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.833 nvme0n1 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.833 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.834 22:47:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.834 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.093 nvme0n1 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.093 22:47:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.661 nvme0n1 00:17:32.661 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.661 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.661 22:47:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.661 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.661 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.920 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.921 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.489 nvme0n1 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.489 22:47:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.058 nvme0n1 00:17:34.058 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.058 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.058 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.058 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.058 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.058 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.318 
22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
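For reference, every pass of this sweep drives the initiator through the same short RPC sequence; the sketch below reconstructs it for the sha256/ffdhe8192, key-index-3 case traced here, assuming ./scripts/rpc.py is pointed at the running SPDK application's RPC socket and that the key3/ckey3 keyring entries were loaded beforehand, as the test's rpc_cmd wrapper arranges (paths and key names outside the trace are assumptions, not shown in this log).

  # Restrict DH-HMAC-CHAP negotiation to one digest/DH-group pair (host/auth.sh@60 above).
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Connect to the 10.0.0.1:4420 listener with host and controller keys (host/auth.sh@61 above).
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # Confirm the controller came up, then detach before the next iteration (host/auth.sh@64-65).
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
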
00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.318 22:47:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.886 nvme0n1 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.886 
22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.886 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.453 nvme0n1 00:17:35.453 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.453 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.453 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.453 22:47:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.453 22:47:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.453 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 nvme0n1 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.714 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
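The target-side half of each pass, nvmet_auth_set_key, produces no captured output in this trace; as a rough illustration only (the helper's actual body is not shown in this log), configuring a kernel nvmet target for the sha384/ffdhe2048, key-index-1 case might look like the following, assuming the host entry lives under /sys/kernel/config/nvmet/hosts and that $key/$ckey hold the DHHC-1 secrets echoed at host/auth.sh@45-46 above.

  # Hypothetical stand-in for: nvmet_auth_set_key sha384 ffdhe2048 1
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest, matching host/auth.sh@48
  echo 'ffdhe2048'    > "$host_dir/dhchap_dhgroup"   # DH group, matching host/auth.sh@49
  echo "$key"         > "$host_dir/dhchap_key"       # host secret (the DHHC-1:00:... blob above)
  echo "$ckey"        > "$host_dir/dhchap_ctrl_key"  # controller secret (the DHHC-1:02:... blob above)
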
00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.715 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 nvme0n1 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 nvme0n1 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.235 nvme0n1 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.235 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.236 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.496 nvme0n1 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
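The nvmf/common.sh lines 741-755 traced around this point implement the small helper that picks the address the host dials. A minimal sketch reconstructed from the xtrace, in the same shell style as the test scripts: the TEST_TRANSPORT variable name and the use of indirect expansion are assumptions (the xtrace only shows the already-expanded values "tcp" and 10.0.0.1), so treat this as an illustration rather than the exact helper.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()

        # Map each transport to the shell variable holding the initiator-side address.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # For tcp the candidate is NVMF_INITIATOR_IP; bail out if nothing matches.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion: ${!ip} resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

    # Values matching this log (hypothetical standalone invocation; prints 10.0.0.1):
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip
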
00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.496 22:47:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.497 22:47:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.497 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.497 22:47:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 nvme0n1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
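Each connect_authenticate pass traced above reduces to two RPCs plus a sanity check: restrict the host to one digest/dhgroup pair, attach with the matching DH-HMAC-CHAP secret, confirm the controller came up, and detach. A condensed sketch of that host-side flow, assuming the rpc_cmd wrapper, the keys/ckeys arrays, and the get_main_ns_ip helper from the surrounding scripts are already loaded; the xtrace toggles of the full script are omitted, and keyN/ckeyN refer to key material set up earlier in the run (not shown here).

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Only pass a controller key when one exists for this keyid (auth.sh@58 in the trace).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Limit the host to a single digest/dhgroup pair, then attach with DH-HMAC-CHAP.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach only succeeds when authentication passed; verify, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The detach at the end is what lets the same controller name (nvme0) be reused for the next digest/dhgroup/keyid combination in the sweep.
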
00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 nvme0n1 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.858 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.859 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.124 nvme0n1 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.124 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.383 nvme0n1 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.383 nvme0n1 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.383 22:47:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.383 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.642 22:47:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.642 nvme0n1 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.642 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.901 nvme0n1 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.901 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.160 22:47:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.160 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.161 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.419 nvme0n1 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:38.419 22:47:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.419 22:47:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.678 nvme0n1 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:38.678 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.937 nvme0n1 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:38.937 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.938 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 nvme0n1 00:17:39.217 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.217 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.217 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.217 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.217 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.476 22:47:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 nvme0n1 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.736 22:47:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.736 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.304 nvme0n1 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.304 22:47:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.563 nvme0n1 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.563 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
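For reference, the host-side sequence that each digest/dhgroup/keyid combination above exercises condenses to a handful of RPC calls. This is a minimal sketch rather than the actual host/auth.sh code: rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py, and key4/ckeyN refer to key names already registered with the target application earlier in the test (not shown in this excerpt).

    # Limit the host to one digest/dhgroup pair before connecting with DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Attach to the target subsystem over TCP using the selected key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4              # add --dhchap-ctrlr-key ckeyN for bidirectional auth
    # Success is judged by the controller actually appearing...
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # ...and it is detached again before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0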
00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.820 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.078 nvme0n1 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
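The iterations in this section are the expansion of a three-level sweep over digest, DH group, and key index; the loop headers are visible in the trace at host/auth.sh@100 through @104. A simplified sketch of that structure follows, with the array contents abbreviated to the values that appear in this excerpt and the key arrays assumed to be populated earlier in the script:

    digests=(sha384 sha512)                        # this excerpt covers the tail of sha384 and the start of sha512
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[] and ckeys[] hold the DHHC-1 secrets set up earlier in host/auth.sh (not shown here).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the host side
            done
        done
    done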
00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.078 22:47:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.645 nvme0n1 00:17:41.645 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.645 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.645 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.645 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.645 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.645 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.902 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.545 nvme0n1 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:42.545 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.546 22:47:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.112 nvme0n1 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.112 22:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.371 22:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:43.371 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.371 22:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 nvme0n1 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
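The repeated "[[ nvme0 == \n\v\m\e\0 ]]" comparisons are how each iteration confirms that authentication actually succeeded: the controller name reported by bdev_nvme_get_controllers must match the name requested at attach time. Expressed as a standalone check (the jq expression comes from the trace; the explicit error message and the loop variables are illustrative additions):

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    if [[ "$name" != nvme0 ]]; then
        echo "DH-HMAC-CHAP connect failed for ${digest}/${dhgroup}, keyid ${keyid}" >&2
        return 1
    fi
    rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next combination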
00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.939 22:47:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.939 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 nvme0n1 00:17:44.507 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.507 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.507 22:47:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.507 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.507 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 22:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.507 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.765 nvme0n1 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.765 22:48:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:44.765 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.766 nvme0n1 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.766 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.024 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.024 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.024 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.024 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.024 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 nvme0n1 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.025 22:48:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.025 22:48:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.025 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.283 nvme0n1 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.283 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 nvme0n1 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.542 22:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 nvme0n1 00:17:45.542 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.542 
22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.542 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.542 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.542 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.799 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.799 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.799 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.800 22:48:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.800 nvme0n1 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:45.800 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.058 nvme0n1 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.058 22:48:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.058 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.315 nvme0n1 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.315 
22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.315 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.614 nvme0n1 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.614 22:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.614 nvme0n1 00:17:46.614 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.614 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.614 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.614 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.614 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.871 22:48:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.871 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.128 nvme0n1 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.128 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.129 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.129 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.129 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.393 nvme0n1 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.393 22:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.650 nvme0n1 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.650 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.907 nvme0n1 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
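The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) provision the target side of DH-HMAC-CHAP before each connect attempt: pick a digest and FFDHE group, set the DHHC-1 host key, and set a bidirectional controller key only when one is defined for the keyid. A minimal reconstruction consistent with this trace is sketched below; the configfs destinations are an assumption (bash xtrace does not print redirections), and only the echoed values and the ckey guard are taken from the log.

    # Sketch of the target-side step traced at host/auth.sh@42-51.
    # Assumption: the echoes land in kernel nvmet configfs attributes for the host NQN.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

        echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"      # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host_cfs}/dhchap_dhgroup"   # e.g. ffdhe6144
        echo "${key}"          > "${host_cfs}/dhchap_key"       # DHHC-1 host key
        # Only set the controller (bidirectional) key when one exists for this keyid.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host_cfs}/dhchap_ctrl_key"
    }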
00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.907 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.164 nvme0n1 00:17:48.164 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.164 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.164 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.164 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.164 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.164 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
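Each connect_authenticate iteration in this sweep runs the same host-side RPC sequence, visible in the xtrace: restrict the initiator to the digest and DH group under test, attach the controller with the matching DH-HMAC-CHAP keys, confirm the controller appears, then detach. A condensed sketch of one pass, using only commands and arguments that appear in this trace (rpc_cmd being the test suite's RPC wrapper) and assuming keys key0..key3 / ckey0..ckey3 were registered earlier in the run:

    # One authenticated connect/verify/teardown pass, as traced at host/auth.sh@55-65.
    digest=sha512 dhgroup=ffdhe6144 keyid=1

    # Limit the host to the digest and FFDHE group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target at 10.0.0.1:4420 with the host key and, when defined,
    # the bidirectional controller key for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The attach succeeds only if authentication passed; verify, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0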
00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.421 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.422 22:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.678 nvme0n1 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.678 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:48.679 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:48.936 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.936 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.936 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.194 nvme0n1 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.194 22:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.760 nvme0n1 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.760 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.761 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.020 nvme0n1 00:17:50.020 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.021 22:48:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTU1MWUyNzAzNGUxZjZiNzVkOTNhYjIwNTcxOTMxYTGOfi5O: 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGM1YjY1YWNmNWFmZGQ4Zjk5YWIxZjViNmQwODAyN2RlNTc0MDc3ODczYTc1N2RmYTliZDIwNGE2ZjE1YTFhN9zSBYk=: 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.021 22:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.958 nvme0n1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.958 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.526 nvme0n1 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.526 22:48:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTYyNmMyMjQ4MTgwMzFlYzA4ZGRkNzM4YzM5ZDU3ZWVfHFnZ: 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJhMmIyYjE4MzVlYjk4MTRhYjIxNGRjNGNmNWVlMzjqlGvf: 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.526 22:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.095 nvme0n1 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQ5NTUzMzBiYmY1YzliYzVhMTgyNTBmMzZhYjM0MWExYzcyZDljNDllOWQ0YTE3cYjMyQ==: 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFhN2IxZWNjNmQ0OTFiZGRjMTdlZWMyZGZkNDUzMzNA5+s/: 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:52.095 22:48:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.095 22:48:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.053 nvme0n1 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2MxZGRkNGQ4OTA1ODNmNDYwOTExNzcxM2Y2N2MwZGFiMzM0MjVmMzllYzJjYWYzYzBiODI3M2Y2NzJmZGUwZQpsHBM=: 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:53.053 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.619 nvme0n1 00:17:53.619 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.619 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.619 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.619 22:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.619 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.619 22:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQzNTUxYzVkOTNiMjk4NzNlZjdlN2VhZTQ4NjdjM2ViYzNlODZkMTE2OWZlMjQzBj6ptg==: 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: ]] 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQxNmM4ZTAwOTU4YTYxZjQ3ODI2ZjI5MmRlNzJlZWYyZmYwOWVlOTg3NzU5ZjQxtzV5Xg==: 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.619 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.620 
22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.620 request: 00:17:53.620 { 00:17:53.620 "name": "nvme0", 00:17:53.620 "trtype": "tcp", 00:17:53.620 "traddr": "10.0.0.1", 00:17:53.620 "adrfam": "ipv4", 00:17:53.620 "trsvcid": "4420", 00:17:53.620 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:53.620 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:53.620 "prchk_reftag": false, 00:17:53.620 "prchk_guard": false, 00:17:53.620 "hdgst": false, 00:17:53.620 "ddgst": false, 00:17:53.620 "method": "bdev_nvme_attach_controller", 00:17:53.620 "req_id": 1 00:17:53.620 } 00:17:53.620 Got JSON-RPC error response 00:17:53.620 response: 00:17:53.620 { 00:17:53.620 "code": -5, 00:17:53.620 "message": "Input/output error" 00:17:53.620 } 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.620 request: 00:17:53.620 { 00:17:53.620 "name": "nvme0", 00:17:53.620 "trtype": "tcp", 00:17:53.620 "traddr": "10.0.0.1", 00:17:53.620 "adrfam": "ipv4", 00:17:53.620 "trsvcid": "4420", 00:17:53.620 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:53.620 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:53.620 "prchk_reftag": false, 00:17:53.620 "prchk_guard": false, 00:17:53.620 "hdgst": false, 00:17:53.620 "ddgst": false, 00:17:53.620 "dhchap_key": "key2", 00:17:53.620 "method": "bdev_nvme_attach_controller", 00:17:53.620 "req_id": 1 00:17:53.620 } 00:17:53.620 Got JSON-RPC error response 00:17:53.620 response: 00:17:53.620 { 00:17:53.620 "code": -5, 00:17:53.620 "message": "Input/output error" 00:17:53.620 } 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:53.620 22:48:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.620 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.879 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.879 request: 00:17:53.879 { 00:17:53.879 "name": "nvme0", 00:17:53.879 "trtype": "tcp", 00:17:53.879 "traddr": "10.0.0.1", 00:17:53.879 "adrfam": "ipv4", 
00:17:53.879 "trsvcid": "4420", 00:17:53.879 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:53.879 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:53.879 "prchk_reftag": false, 00:17:53.879 "prchk_guard": false, 00:17:53.880 "hdgst": false, 00:17:53.880 "ddgst": false, 00:17:53.880 "dhchap_key": "key1", 00:17:53.880 "dhchap_ctrlr_key": "ckey2", 00:17:53.880 "method": "bdev_nvme_attach_controller", 00:17:53.880 "req_id": 1 00:17:53.880 } 00:17:53.880 Got JSON-RPC error response 00:17:53.880 response: 00:17:53.880 { 00:17:53.880 "code": -5, 00:17:53.880 "message": "Input/output error" 00:17:53.880 } 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.880 rmmod nvme_tcp 00:17:53.880 rmmod nvme_fabrics 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78666 ']' 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78666 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78666 ']' 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78666 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78666 00:17:53.880 killing process with pid 78666 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78666' 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78666 00:17:53.880 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78666 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.139 
22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:54.139 22:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:55.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.074 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.074 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:55.074 22:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4NV /tmp/spdk.key-null.zbW /tmp/spdk.key-sha256.CiR /tmp/spdk.key-sha384.HXh /tmp/spdk.key-sha512.XGS /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:55.074 22:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:55.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.333 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.333 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.592 00:17:55.592 real 0m35.491s 00:17:55.592 user 0m32.109s 00:17:55.592 sys 0m3.648s 00:17:55.592 22:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.592 22:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.592 
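The teardown above ends the auth test by unloading the host-side nvme modules and dismantling the kernel nvmet target that acted as the authentication peer. Condensed from the configfs operations visible in the capture, with timestamps and xtrace prefixes dropped (the 'echo 0' also appears in the log, but the attribute it is written to is not captured, so it is left as a comment):

    # host/auth.sh cleanup + clean_kernel_target, replayed from the log
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # echo 0 > <attribute not shown in this capture>
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet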
************************************ 00:17:55.592 END TEST nvmf_auth_host 00:17:55.592 ************************************ 00:17:55.592 22:48:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.592 22:48:10 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:55.592 22:48:10 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:55.592 22:48:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.592 22:48:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.592 22:48:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.592 ************************************ 00:17:55.592 START TEST nvmf_digest 00:17:55.592 ************************************ 00:17:55.592 22:48:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:55.592 * Looking for test storage... 00:17:55.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.592 22:48:11 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:55.593 Cannot find device "nvmf_tgt_br" 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.593 Cannot find device "nvmf_tgt_br2" 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:55.593 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:55.851 Cannot find device "nvmf_tgt_br" 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:55.851 22:48:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:55.851 Cannot find device "nvmf_tgt_br2" 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:55.851 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.852 22:48:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:55.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:55.852 00:17:55.852 --- 10.0.0.2 ping statistics --- 00:17:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.852 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:55.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:55.852 00:17:55.852 --- 10.0.0.3 ping statistics --- 00:17:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.852 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:55.852 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:55.852 00:17:55.852 --- 10.0.0.1 ping statistics --- 00:17:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.852 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:56.110 ************************************ 00:17:56.110 START TEST nvmf_digest_clean 00:17:56.110 ************************************ 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:56.110 22:48:11 
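For orientation, this is the test network that the ping checks above just verified, condensed from the ip and iptables calls in the capture (the second target interface, nvmf_tgt_if2 with 10.0.0.3, and the individual link-up steps follow the same pattern and are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT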
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:56.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80245 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80245 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80245 ']' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.110 22:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:56.110 [2024-07-15 22:48:11.509467] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:17:56.110 [2024-07-15 22:48:11.509620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.110 [2024-07-15 22:48:11.652260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.368 [2024-07-15 22:48:11.780013] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.368 [2024-07-15 22:48:11.780069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.368 [2024-07-15 22:48:11.780084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.368 [2024-07-15 22:48:11.780094] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.368 [2024-07-15 22:48:11.780103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
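nvmfappstart amounts to launching the target inside that namespace and blocking until its RPC socket answers. The launch command below is copied from the log; the polling loop is only a rough stand-in for what waitforlisten does (the helper lives in autotest_common.sh and its body is not part of this capture):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
        sleep 0.1
    done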
00:17:56.368 [2024-07-15 22:48:11.780138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.933 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.933 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:56.933 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.933 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.933 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.192 [2024-07-15 22:48:12.585126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:57.192 null0 00:17:57.192 [2024-07-15 22:48:12.636651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.192 [2024-07-15 22:48:12.660734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
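common_target_config then drives the paused target over /var/tmp/spdk.sock. The capture only shows the resulting notices (the uring socket override, the null0 bdev, TCP transport init, and a listener on 10.0.0.2:4420), not the RPCs themselves, so the sequence below is a plausible rpc.py equivalent rather than the script's real argument list; sizes and subsystem flags are placeholders:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC sock_set_default_impl -i uring      # matches the socket implementation override notice
    $RPC framework_start_init                # needed because the target started with --wait-for-rpc
    $RPC bdev_null_create null0 100 4096     # placeholder size (MiB) and block size
    $RPC nvmf_create_transport -t tcp -o     # NVMF_TRANSPORT_OPTS='-t tcp -o' earlier in the log
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420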
00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80277 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80277 /var/tmp/bperf.sock 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80277 ']' 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.192 22:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.192 [2024-07-15 22:48:12.717833] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:17:57.192 [2024-07-15 22:48:12.718111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80277 ] 00:17:57.451 [2024-07-15 22:48:12.859995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.451 [2024-07-15 22:48:12.993857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.385 22:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.385 22:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:58.385 22:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:58.385 22:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:58.385 22:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:58.643 [2024-07-15 22:48:14.019117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:58.643 22:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.643 22:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.901 nvme0n1 00:17:58.901 22:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:58.901 22:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.160 Running I/O for 2 seconds... 
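Each run_bperf invocation follows the same shape, and every command in it is visible above: start bdevperf paused on its own RPC socket, finish framework init, attach an NVMe-oF bdev to the target with data digest enabled (--ddgst), then drive the 2-second workload through bdevperf.py. Condensed, with the waitforlisten on /var/tmp/bperf.sock omitted:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests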
00:18:01.059 00:18:01.059 Latency(us) 00:18:01.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.059 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:01.059 nvme0n1 : 2.00 14831.65 57.94 0.00 0.00 8624.41 7745.16 19422.49 00:18:01.059 =================================================================================================================== 00:18:01.059 Total : 14831.65 57.94 0.00 0.00 8624.41 7745.16 19422.49 00:18:01.059 0 00:18:01.059 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:01.059 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:01.059 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:01.059 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:01.059 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:01.059 | select(.opcode=="crc32c") 00:18:01.059 | "\(.module_name) \(.executed)"' 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80277 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80277 ']' 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80277 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80277 00:18:01.317 killing process with pid 80277 00:18:01.317 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.317 00:18:01.317 Latency(us) 00:18:01.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.317 =================================================================================================================== 00:18:01.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80277' 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80277 00:18:01.317 22:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80277 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80343 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80343 /var/tmp/bperf.sock 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80343 ']' 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.575 22:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:01.833 [2024-07-15 22:48:17.157355] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:18:01.833 [2024-07-15 22:48:17.157670] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80343 ] 00:18:01.833 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:01.833 Zero copy mechanism will not be used. 
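The pass/fail decision for each run comes from the accel framework's counters rather than from bdevperf's own numbers: after the 2-second workload, the test reads the crc32c operation statistics over the same bperf socket and requires that the expected module (software here, since scan_dsa=false) executed a non-zero number of digest operations. The check reduces to the pipeline below, copied from the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints "<module_name> <executed>"; the test asserts module_name == software and executed > 0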
00:18:01.833 [2024-07-15 22:48:17.296746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.091 [2024-07-15 22:48:17.411598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.659 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.659 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:02.659 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:02.659 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:02.659 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:02.917 [2024-07-15 22:48:18.370568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:02.917 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.917 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.177 nvme0n1 00:18:03.177 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:03.177 22:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.435 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.435 Zero copy mechanism will not be used. 00:18:03.435 Running I/O for 2 seconds... 
00:18:05.334 00:18:05.334 Latency(us) 00:18:05.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.334 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:05.334 nvme0n1 : 2.00 7196.28 899.53 0.00 0.00 2220.03 1980.97 11141.12 00:18:05.334 =================================================================================================================== 00:18:05.334 Total : 7196.28 899.53 0.00 0.00 2220.03 1980.97 11141.12 00:18:05.334 0 00:18:05.334 22:48:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:05.334 22:48:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:05.334 22:48:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:05.334 22:48:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:05.334 22:48:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:05.334 | select(.opcode=="crc32c") 00:18:05.334 | "\(.module_name) \(.executed)"' 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80343 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80343 ']' 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80343 00:18:05.592 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80343 00:18:05.850 killing process with pid 80343 00:18:05.850 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.850 00:18:05.850 Latency(us) 00:18:05.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.850 =================================================================================================================== 00:18:05.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80343' 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80343 00:18:05.850 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80343 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80402 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80402 /var/tmp/bperf.sock 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80402 ']' 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.108 22:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.108 [2024-07-15 22:48:21.497968] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:18:06.108 [2024-07-15 22:48:21.498375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80402 ] 00:18:06.108 [2024-07-15 22:48:21.645595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.368 [2024-07-15 22:48:21.751716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.933 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.933 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:06.933 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:06.933 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:06.933 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:07.191 [2024-07-15 22:48:22.700235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.191 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.191 22:48:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.757 nvme0n1 00:18:07.757 22:48:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:07.757 22:48:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.757 Running I/O for 2 seconds... 
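The client-side pattern repeated in the trace above (here for run_bperf randwrite 4096 128) boils down to a short command sequence. The following is a minimal sketch assembled only from the paths, socket, and RPC arguments visible in the log; the simple socket-poll loop stands in for the harness's waitforlisten helper, and error handling is omitted.

#!/usr/bin/env bash
# Sketch of one run_bperf iteration as traced above (randwrite, 4096-byte I/O, queue depth 128).
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# Launch bdevperf on core 1 (-m 2), paused until RPC-driven init (--wait-for-rpc).
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

# Wait for the RPC socket (stand-in for the waitforlisten helper), then finish framework init.
while [ ! -S "$SOCK" ]; do sleep 0.1; done
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

# Attach the NVMe/TCP controller with data digest enabled (--ddgst), which creates nvme0n1.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload, then tear bdevperf down as the harness's killprocess step does.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$bperfpid"
wait "$bperfpid" 2>/dev/null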
00:18:09.655 00:18:09.655 Latency(us) 00:18:09.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.655 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.655 nvme0n1 : 2.00 16098.18 62.88 0.00 0.00 7944.89 4021.53 15073.28 00:18:09.655 =================================================================================================================== 00:18:09.655 Total : 16098.18 62.88 0.00 0.00 7944.89 4021.53 15073.28 00:18:09.655 0 00:18:09.655 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:09.655 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:09.655 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:09.655 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:09.655 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:09.655 | select(.opcode=="crc32c") 00:18:09.655 | "\(.module_name) \(.executed)"' 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80402 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80402 ']' 00:18:10.220 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80402 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80402 00:18:10.221 killing process with pid 80402 00:18:10.221 Received shutdown signal, test time was about 2.000000 seconds 00:18:10.221 00:18:10.221 Latency(us) 00:18:10.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.221 =================================================================================================================== 00:18:10.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80402' 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80402 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80402 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80458 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80458 /var/tmp/bperf.sock 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80458 ']' 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.221 22:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:10.478 [2024-07-15 22:48:25.805642] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:18:10.478 [2024-07-15 22:48:25.805970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:18:10.478 Zero copy mechanism will not be used. 
00:18:10.478 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80458 ] 00:18:10.478 [2024-07-15 22:48:25.945744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.736 [2024-07-15 22:48:26.055938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.303 22:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.303 22:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:11.303 22:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:11.303 22:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:11.303 22:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:11.560 [2024-07-15 22:48:27.020729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:11.560 22:48:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.560 22:48:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.817 nvme0n1 00:18:11.817 22:48:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:11.817 22:48:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:12.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.075 Zero copy mechanism will not be used. 00:18:12.075 Running I/O for 2 seconds... 
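Each run above is followed by the same verification: the harness reads the accel framework statistics from the bdevperf RPC socket and checks that crc32c work was actually executed, and by the expected module (software in this job, since scan_dsa is false and no DSA offload is in use). A standalone sketch of that check, using the exact RPC and jq filter seen in the trace:

#!/usr/bin/env bash
# Sketch only: verify that crc32c digests were computed by the expected accel module.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
exp_module=software   # scan_dsa=false in these runs, so the software module is expected

# accel_get_stats reports per-opcode counters; pick out the crc32c entry.
read -r acc_module acc_executed < <("$rpc" -s "$sock" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

(( acc_executed > 0 ))             || { echo "no crc32c operations were executed"; exit 1; }
[[ $acc_module == "$exp_module" ]] || { echo "crc32c ran in $acc_module, expected $exp_module"; exit 1; }
echo "crc32c executed $acc_executed times by $acc_module"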
00:18:13.974 00:18:13.974 Latency(us) 00:18:13.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.974 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:13.974 nvme0n1 : 2.00 6394.02 799.25 0.00 0.00 2496.44 2025.66 6851.49 00:18:13.974 =================================================================================================================== 00:18:13.974 Total : 6394.02 799.25 0.00 0.00 2496.44 2025.66 6851.49 00:18:13.974 0 00:18:13.974 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:13.974 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:13.974 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:13.974 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:13.974 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:13.974 | select(.opcode=="crc32c") 00:18:13.974 | "\(.module_name) \(.executed)"' 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80458 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80458 ']' 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80458 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80458 00:18:14.541 killing process with pid 80458 00:18:14.541 Received shutdown signal, test time was about 2.000000 seconds 00:18:14.541 00:18:14.541 Latency(us) 00:18:14.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.541 =================================================================================================================== 00:18:14.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80458' 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80458 00:18:14.541 22:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80458 00:18:14.541 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80245 00:18:14.541 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80245 ']' 00:18:14.541 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80245 00:18:14.541 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:14.541 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.541 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80245 00:18:14.800 killing process with pid 80245 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80245' 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80245 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80245 00:18:14.800 00:18:14.800 real 0m18.902s 00:18:14.800 user 0m36.682s 00:18:14.800 sys 0m4.745s 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.800 ************************************ 00:18:14.800 END TEST nvmf_digest_clean 00:18:14.800 ************************************ 00:18:14.800 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:15.059 ************************************ 00:18:15.059 START TEST nvmf_digest_error 00:18:15.059 ************************************ 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80547 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80547 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80547 ']' 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:15.059 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.059 22:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.059 [2024-07-15 22:48:30.466376] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:18:15.059 [2024-07-15 22:48:30.466515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.059 [2024-07-15 22:48:30.601537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.318 [2024-07-15 22:48:30.714858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.318 [2024-07-15 22:48:30.714916] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.318 [2024-07-15 22:48:30.714944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.318 [2024-07-15 22:48:30.714951] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.318 [2024-07-15 22:48:30.714958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.318 [2024-07-15 22:48:30.714982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.883 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.883 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:15.883 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.883 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.883 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.142 [2024-07-15 22:48:31.479503] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.142 [2024-07-15 22:48:31.541880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.142 null0 00:18:16.142 [2024-07-15 22:48:31.589846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.142 [2024-07-15 22:48:31.613965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80579 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80579 /var/tmp/bperf.sock 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80579 ']' 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:16.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.142 22:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.142 [2024-07-15 22:48:31.668182] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:18:16.142 [2024-07-15 22:48:31.668653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80579 ] 00:18:16.400 [2024-07-15 22:48:31.807575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.401 [2024-07-15 22:48:31.930433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.659 [2024-07-15 22:48:31.987176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.225 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.225 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:17.225 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.225 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.484 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:17.484 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.484 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.484 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.484 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.484 22:48:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.742 nvme0n1 00:18:17.742 22:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:17.742 22:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.742 22:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.742 22:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.742 22:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:17.742 22:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:17.999 Running I/O for 2 seconds... 
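The nvmf_digest_error setup traced above differs from the clean runs mainly in how crc32c is handled on the target. Condensing the RPCs visible in the log into a hedged sketch (the client is a bdevperf instance launched much as in the earlier sketch, here with -w randread and without --wait-for-rpc, and the common target config creates the TCP transport, the null0 bdev, and the 10.0.0.2:4420 listener):

#!/usr/bin/env bash
# Sketch of the digest-error wiring; rpc.py with no -s talks to the nvmf_tgt on its
# default socket, while -s /var/tmp/bperf.sock talks to the bdevperf client.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF=/var/tmp/bperf.sock

# Target (before framework init): route all crc32c work to the accel "error" module.
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
# ... framework init and the common target config follow, as recorded in the log ...

# Client: enable NVMe error stats and disable bdev retries so every failure surfaces.
"$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: keep injection disabled so the controller can attach cleanly.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Client: attach with data digest enabled (--ddgst), creating nvme0n1.
"$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: start corrupting crc32c results (-t corrupt -i 256, verbatim from the trace),
# which is what produces the "data digest error" completions logged below.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Client: run the timed workload.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests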
00:18:17.999 [2024-07-15 22:48:33.355305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:17.999 [2024-07-15 22:48:33.355587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.999 [2024-07-15 22:48:33.355731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.999 [2024-07-15 22:48:33.372825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:17.999 [2024-07-15 22:48:33.373027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.999 [2024-07-15 22:48:33.373146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.999 [2024-07-15 22:48:33.390018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:17.999 [2024-07-15 22:48:33.390203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.999 [2024-07-15 22:48:33.390366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.407279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.407457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.407612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.424538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.424735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.424858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.442377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.442555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.442776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.459756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.459930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.460061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.476981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.477158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.477289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.494242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.494282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.494297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.511047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.511088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.511102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.527829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.527868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.527883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.544666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.544703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.544717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.000 [2024-07-15 22:48:33.561490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.000 [2024-07-15 22:48:33.561529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.000 [2024-07-15 22:48:33.561543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.578288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.578328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.595153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.595190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.595220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.612413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.612452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.612467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.629912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.629949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.629963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.646872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.646907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.646938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.663460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.663495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.663525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.680188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.680225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.680240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.697024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.697060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.697090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.713570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.713650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.713665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.258 [2024-07-15 22:48:33.730575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.258 [2024-07-15 22:48:33.730642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.258 [2024-07-15 22:48:33.730656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.259 [2024-07-15 22:48:33.747019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.259 [2024-07-15 22:48:33.747054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.259 [2024-07-15 22:48:33.747084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.259 [2024-07-15 22:48:33.763924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.259 [2024-07-15 22:48:33.763991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.259 [2024-07-15 22:48:33.764036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.259 [2024-07-15 22:48:33.781503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.259 [2024-07-15 22:48:33.781538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.259 [2024-07-15 22:48:33.781568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.259 [2024-07-15 22:48:33.798106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.259 [2024-07-15 22:48:33.798142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.259 [2024-07-15 22:48:33.798172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.259 [2024-07-15 22:48:33.814932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.259 [2024-07-15 22:48:33.814968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:18.259 [2024-07-15 22:48:33.814982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.832455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.832493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.832507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.849338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.849374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.849403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.866317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.866353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.866382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.883630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.883680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.883693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.899586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.899632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.899661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.915745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.915779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.915808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.931485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.931519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.931548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.948323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.948360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.948374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.964767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.964816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.964846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.980745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.980793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.980822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:33.997194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:33.997229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:33.997259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:34.013239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:34.013273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:34.013302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:34.029174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:34.029209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:34.029238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:34.046199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:34.046252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:34.046266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:34.063708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.517 [2024-07-15 22:48:34.063742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.517 [2024-07-15 22:48:34.063771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.517 [2024-07-15 22:48:34.080260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.518 [2024-07-15 22:48:34.080321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.518 [2024-07-15 22:48:34.080335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 22:48:34.097157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.776 [2024-07-15 22:48:34.097193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 22:48:34.097223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 22:48:34.114146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.776 [2024-07-15 22:48:34.114183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 22:48:34.114213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 22:48:34.131363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.776 [2024-07-15 22:48:34.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.131427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.148440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.148479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.148494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.164540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 
00:18:18.777 [2024-07-15 22:48:34.164588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.164603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.180614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.180666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.180681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.197797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.197834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.197848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.214440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.214475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.214504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.230269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.230302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.230332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.246372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.246407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.246436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.262130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.262164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.262193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.278186] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.278220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.278249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.294744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.294781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.294795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.312233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.312275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.312305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 22:48:34.329754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:18.777 [2024-07-15 22:48:34.329790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 22:48:34.329819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.346927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.346963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.346977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.363802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.363838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.363852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.380707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.380759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.397687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.397723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.397753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.421760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.421797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.421810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.438426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.438463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.438494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.455370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.455407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.455421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.472354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.472391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.472405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.489280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.489316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.489345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.506139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.506176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.506206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.522346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.522381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.522410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.538597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.538631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.538661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.554736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.554777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.554808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.571228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.571265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.571295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.036 [2024-07-15 22:48:34.588082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.036 [2024-07-15 22:48:34.588120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.036 [2024-07-15 22:48:34.588151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.605140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.605180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.605194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.622014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.622069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 
22:48:34.622099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.638957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.638995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.639009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.655759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.655796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.655810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.672731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.672768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.672782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.689486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.689524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.689538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.706400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.706440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.706454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.723227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.723264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.723278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.740018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.740057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15051 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.740071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.757008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.757045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.757076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.773774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.773810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.773841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.790686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.790722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.790753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.807431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.807468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.807498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.824168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.824206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.824236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.840849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.840886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.840900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.295 [2024-07-15 22:48:34.857722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.295 [2024-07-15 22:48:34.857759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:4932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.295 [2024-07-15 22:48:34.857773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.874421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.874457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.874488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.891182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.891218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.891248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.907975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.908030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.908046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.924749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.924785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.924815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.941412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.941450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.941480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.958161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.958228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.974850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.974887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.974900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:34.991451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:34.991489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:34.991519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.008415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.008453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.008466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.025383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.025422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.025436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.042025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.042060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.042072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.058795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.058832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.058846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.075837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.075875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.075888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.093130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.093167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.093181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.554 [2024-07-15 22:48:35.110273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.554 [2024-07-15 22:48:35.110312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.554 [2024-07-15 22:48:35.110326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.127238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.127275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.127289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.144150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.144186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.144200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.161152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.161188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.161202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.177666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.177701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.177715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.194138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.194175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.194188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.210819] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.210855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.210869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.227178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.227214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.227245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.244253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.244299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.244313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.261093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.261128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.261158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.277960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.277997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.278026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.294998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.295033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.295063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.311534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.311599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.311614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:19.814 [2024-07-15 22:48:35.328314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b9a40) 00:18:19.814 [2024-07-15 22:48:35.328351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.814 [2024-07-15 22:48:35.328365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.814 00:18:19.814 Latency(us) 00:18:19.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.814 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:19.814 nvme0n1 : 2.00 15018.90 58.67 0.00 0.00 8515.18 7685.59 32410.53 00:18:19.814 =================================================================================================================== 00:18:19.814 Total : 15018.90 58.67 0.00 0.00 8515.18 7685.59 32410.53 00:18:19.814 0 00:18:19.814 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:19.814 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:19.814 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:19.814 | .driver_specific 00:18:19.814 | .nvme_error 00:18:19.814 | .status_code 00:18:19.814 | .command_transient_transport_error' 00:18:19.814 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80579 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80579 ']' 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80579 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80579 00:18:20.380 killing process with pid 80579 00:18:20.380 Received shutdown signal, test time was about 2.000000 seconds 00:18:20.380 00:18:20.380 Latency(us) 00:18:20.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.380 =================================================================================================================== 00:18:20.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80579' 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80579 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80579 00:18:20.380 22:48:35 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:20.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80638 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80638 /var/tmp/bperf.sock 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80638 ']' 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.380 22:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.638 [2024-07-15 22:48:35.966914] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:18:20.638 [2024-07-15 22:48:35.967170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:18:20.638 Zero copy mechanism will not be used. 
00:18:20.638 =spdk_pid80638 ] 00:18:20.638 [2024-07-15 22:48:36.104859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.896 [2024-07-15 22:48:36.212377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.896 [2024-07-15 22:48:36.266238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:21.462 22:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.462 22:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:21.462 22:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:21.462 22:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:21.720 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:21.720 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.720 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.720 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.720 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.720 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.978 nvme0n1 00:18:21.978 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:21.978 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.978 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.978 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.978 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:21.978 22:48:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:22.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:22.237 Zero copy mechanism will not be used. 00:18:22.237 Running I/O for 2 seconds... 
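The run_bperf_err sequence that starts here is easier to follow when pulled out of the xtrace noise. What follows is a minimal sketch of the commands the trace above actually executes, using only programs and arguments that appear in this log; the two accel_error_inject_error calls are issued through rpc_cmd in the trace, so pointing them at rpc.py's default target socket below is an assumption, and the waitforlisten handshake after launching bdevperf is omitted.

  # launch bdevperf on its own RPC socket, waiting for a perform_tests RPC (-z), as in the trace
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # enable per-command NVMe error statistics and retry count -1, as set over bperf.sock above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target-side accel injection starts disabled (socket assumed: rpc.py default)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # attach the NVMe-oF/TCP controller with data digest enabled (--ddgst), as in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # turn on crc32c corruption with the same -i 32 argument used above, so data digests stop
  # matching and the host reports the digest errors seen in the records that follow
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # kick off the 2-second randread run that produces the error records below
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests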
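Each of these error-injection runs is judged on a single counter: the transient-transport-error count accumulated in bdevperf's NVMe error statistics while the digest failures are retried. The get_transient_errcount readback seen earlier in the trace (118 > 0 for the run that just finished) reduces to the sketch below; the rpc.py path, socket, and jq filter are copied from this log, and the count variable name is illustrative only.

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))   # the test passes only if at least one TRANSIENT TRANSPORT ERROR completion was recorded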
00:18:22.237 [2024-07-15 22:48:37.672837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.237 [2024-07-15 22:48:37.672907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.237 [2024-07-15 22:48:37.672941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.237 [2024-07-15 22:48:37.677415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.237 [2024-07-15 22:48:37.677641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.237 [2024-07-15 22:48:37.677781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.237 [2024-07-15 22:48:37.682239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.237 [2024-07-15 22:48:37.682403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.237 [2024-07-15 22:48:37.682421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.237 [2024-07-15 22:48:37.686712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.237 [2024-07-15 22:48:37.686750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.237 [2024-07-15 22:48:37.686780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.237 [2024-07-15 22:48:37.690960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.237 [2024-07-15 22:48:37.690999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.237 [2024-07-15 22:48:37.691014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.695354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.695391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.695421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.699788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.699826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.699841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.704035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.704085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.704116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.708455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.708495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.708509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.712876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.712914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.712928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.717242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.717280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.717311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.721506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.721545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.721575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.725786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.725825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.725839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.730109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.730146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.730177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.734483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.734521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.734551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.738889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.738927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.738941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.743274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.743312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.743342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.747705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.747742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.747756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.752109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.752163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.752177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.756436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.756474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.756488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.760772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.760812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:22.238 [2024-07-15 22:48:37.760826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.765119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.765159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.765173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.769345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.769399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.769429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.773816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.773853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.773867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.778262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.778302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.778316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.782670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.782707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.782722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.786979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.787016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.787030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.791275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.791314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.791328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.795557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.795747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.795775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.238 [2024-07-15 22:48:37.800095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.238 [2024-07-15 22:48:37.800135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.238 [2024-07-15 22:48:37.800150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.499 [2024-07-15 22:48:37.804606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.499 [2024-07-15 22:48:37.804644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.499 [2024-07-15 22:48:37.804657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.499 [2024-07-15 22:48:37.809089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.499 [2024-07-15 22:48:37.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.499 [2024-07-15 22:48:37.809190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.499 [2024-07-15 22:48:37.813551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.499 [2024-07-15 22:48:37.813632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.499 [2024-07-15 22:48:37.813647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.499 [2024-07-15 22:48:37.817932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.499 [2024-07-15 22:48:37.817984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.499 [2024-07-15 22:48:37.818013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.499 [2024-07-15 22:48:37.822337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:22.499 [2024-07-15 22:48:37.822377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:22.499 [2024-07-15 22:48:37.822408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:22.499 [2024-07-15 22:48:37.826735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0)
00:18:22.499 [2024-07-15 22:48:37.826772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:22.499 [2024-07-15 22:48:37.826802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:22.499 [2024-07-15 22:48:37.830989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0)
00:18:22.499 [2024-07-15 22:48:37.831026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:22.499 [2024-07-15 22:48:37.831056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats: an nvme_tcp.c:1459 data digest error on tqpair=(0x1f6b3d0), an nvme_qpair.c:243 READ sqid:1 cid:15 nsid:1 notice with varying lba (len:32), and an nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061, roughly every 4 ms from 22:48:37.835 through 22:48:38.379 ...]
00:18:23.024 [2024-07-15 22:48:38.379180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0)
00:18:23.024 [2024-07-15 22:48:38.379213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:23.024 [2024-07-15 22:48:38.379226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0
m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.383486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.383523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.383536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.387808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.387841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.387855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.392159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.392208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.392221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.396600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.396632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.396645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.400919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.400951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.400980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.405306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.405356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.405384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.409697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.409745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.409758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.414105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.414155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.414168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.418378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.418412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.418425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.422808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.422857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.422870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.427068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.427117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.427129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.431320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.431369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.431381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.435519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.435569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.435627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.439711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.439745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.439757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.443966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.444015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.024 [2024-07-15 22:48:38.444029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.024 [2024-07-15 22:48:38.448335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.024 [2024-07-15 22:48:38.448378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.448394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.452587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.452622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.452635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.456683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.456716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.456728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.460821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.460854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.460867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.465035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.465069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.465082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.469225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.469259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.025 [2024-07-15 22:48:38.469271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.473473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.473508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.473521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.477828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.477862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.477875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.482052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.482102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.482115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.486311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.486360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.486374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.490761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.490810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.490823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.495064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.495113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.495127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.499318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.499367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.499380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.503658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.503691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.503703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.507966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.507999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.508012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.512166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.512200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.512213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.516509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.516544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.516557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.520706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.520769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.520783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.525049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.525099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.525112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.529409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.529459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.529472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.533767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.533816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.533830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.538038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.538087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.538100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.542379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.542429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.542441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.546620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.546668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.546681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.550899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.550932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.550945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.555134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.555183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.555196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.559355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 
00:18:23.025 [2024-07-15 22:48:38.559405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.559418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.563525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.563584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.563599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.567661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.567693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.025 [2024-07-15 22:48:38.567706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.025 [2024-07-15 22:48:38.571824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.025 [2024-07-15 22:48:38.571857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.026 [2024-07-15 22:48:38.571869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.026 [2024-07-15 22:48:38.575969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.026 [2024-07-15 22:48:38.576003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.026 [2024-07-15 22:48:38.576015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.026 [2024-07-15 22:48:38.580262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.026 [2024-07-15 22:48:38.580303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.026 [2024-07-15 22:48:38.580316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.026 [2024-07-15 22:48:38.584614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.026 [2024-07-15 22:48:38.584647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.026 [2024-07-15 22:48:38.584659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.026 [2024-07-15 22:48:38.588759] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.026 [2024-07-15 22:48:38.588791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.026 [2024-07-15 22:48:38.588803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.285 [2024-07-15 22:48:38.592857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.285 [2024-07-15 22:48:38.592890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-07-15 22:48:38.592902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.285 [2024-07-15 22:48:38.597201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.285 [2024-07-15 22:48:38.597250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-07-15 22:48:38.597263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.285 [2024-07-15 22:48:38.601502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.285 [2024-07-15 22:48:38.601551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-07-15 22:48:38.601564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.285 [2024-07-15 22:48:38.605791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.285 [2024-07-15 22:48:38.605825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-07-15 22:48:38.605838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.285 [2024-07-15 22:48:38.610050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.285 [2024-07-15 22:48:38.610099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.285 [2024-07-15 22:48:38.610112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.614349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.614399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.614412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.618549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.618607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.618619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.622707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.622755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.622767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.626752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.626800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.626812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.630838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.630900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.630913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.635075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.635124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.635137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.639393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.639440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.639453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.643786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.643834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.643864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.648038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.648086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.648099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.652324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.652358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.652371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.656392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.656426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.656438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.660548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.660591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.660604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.664790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.664822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.664835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.669037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.669085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.669098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.673255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.673303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.673315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.677661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.677693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.677705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.681950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.682013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.682026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.686334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.686382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.686395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.690589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.690637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.690650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.694642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.694689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.694702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.698744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.698792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.698804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.702999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.703049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.286 [2024-07-15 22:48:38.703062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.707397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.707429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.707442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.711755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.711802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.711815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.715813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.715861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.715874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.719910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.286 [2024-07-15 22:48:38.719957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.286 [2024-07-15 22:48:38.719970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.286 [2024-07-15 22:48:38.723963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.724011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.724023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.728173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.728222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.728235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.732455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.732488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.732501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.736645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.736677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.736689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.740820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.740866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.745033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.745068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.745081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.749235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.749270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.749282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.753523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.753557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.753586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.757870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.757906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.757919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.762135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.762169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.762182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.766538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.766583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.766596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.770817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.770850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.770873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.775195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.775243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.775256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.779665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.779697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.779710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.784081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.784145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.784158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.788340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.788373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.788386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.792571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 
00:18:23.287 [2024-07-15 22:48:38.792602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.792615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.796771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.796815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.800988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.801021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.805279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.805313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.805326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.809574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.809607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.809619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.813831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.813864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.813877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.818087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.818121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.818134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.822449] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.822498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.822511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.826801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.826835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.826848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.831023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.831062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.835257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.835291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.835304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.839592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.287 [2024-07-15 22:48:38.839625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.287 [2024-07-15 22:48:38.839638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.287 [2024-07-15 22:48:38.843877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.288 [2024-07-15 22:48:38.843910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-07-15 22:48:38.843923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.288 [2024-07-15 22:48:38.848058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.288 [2024-07-15 22:48:38.848092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.288 [2024-07-15 22:48:38.848105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.852376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.852409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.852422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.856750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.856783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.856796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.861061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.861110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.861123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.865289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.865339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.865352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.869681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.869729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.869742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.873935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.873985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.873997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.878293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.878343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.878355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.882518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.882568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.882593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.886693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.886741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.886754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.890973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.891022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.891035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.895263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.895312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.895326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.899452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.899486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.899499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.903697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.903731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.903744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.907827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.907876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.907889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.911983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.912031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.912044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.916159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.916208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.916221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.920472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.920506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.920519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.924815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.924848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.924861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.929041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.929090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.929103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.933324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.933358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.933371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.937624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.937657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:23.548 [2024-07-15 22:48:38.937670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.941916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.941964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.941992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.946333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.946367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.946380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.950727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.950775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.950788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.955203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.955237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.548 [2024-07-15 22:48:38.955250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.548 [2024-07-15 22:48:38.959537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.548 [2024-07-15 22:48:38.959598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.959611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.964034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.964084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.964098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.968620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.968653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.968665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.972902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.972950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.972963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.977171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.977220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.977233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.981397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.981445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.981458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.985720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.985767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.985780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.990018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.990068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.990081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.994271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.994321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.994333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:38.998548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:38.998624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:38.998638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.002773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.002821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.002834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.007087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.007121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.007134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.011369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.011417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.011429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.015606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.015653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.015667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.019885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.019932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.019945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.024036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.024084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.024097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.028121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 
00:18:23.549 [2024-07-15 22:48:39.028169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.028182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.032447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.032480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.032493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.036915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.036949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.036962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.041113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.041148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.041160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.045520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.045569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.045608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.049933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.049982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.050010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.054397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.054432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.054445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.058902] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.058936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.058948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.063310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.063359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.063371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.067680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.067742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.067757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.072146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.072181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.072193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.076467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.076501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.076514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.080725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.549 [2024-07-15 22:48:39.080773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.549 [2024-07-15 22:48:39.080786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.549 [2024-07-15 22:48:39.084867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.084916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.084929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:23.550 [2024-07-15 22:48:39.089043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.089092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.089105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.550 [2024-07-15 22:48:39.093206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.093254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.093267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.550 [2024-07-15 22:48:39.097575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.097635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.097648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.550 [2024-07-15 22:48:39.101835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.101889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.101902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.550 [2024-07-15 22:48:39.106069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.106119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.106131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.550 [2024-07-15 22:48:39.110519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.550 [2024-07-15 22:48:39.110568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.550 [2024-07-15 22:48:39.110592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.114925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.114973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.114986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.119345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.119379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.119392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.123577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.123624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.123637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.127687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.127734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.127746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.131803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.131836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.131849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.135961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.135994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.136007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.140337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.140370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.140382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.144797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.144846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.144858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.149022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.149086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.149099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.153349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.153399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.153426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.157674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.157706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.157718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.162158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.162206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.162219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.166338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.166386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.166399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.170656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.170703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.170716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.174800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.174848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:23.811 [2024-07-15 22:48:39.174861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.178936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.178984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.178997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.183121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.183170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.183182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.187317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.187366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.187379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.191636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.191684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.191697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.195892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.195925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.195938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.200114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.200148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.200160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.204525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.204558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.204585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.208667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.811 [2024-07-15 22:48:39.208699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.811 [2024-07-15 22:48:39.208712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.811 [2024-07-15 22:48:39.212881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.212915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.212935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.217209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.217244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.217256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.221502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.221536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.221549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.225863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.225897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.225910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.230117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.230167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.230180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.234346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.234395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.234408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.238766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.238799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.238813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.243013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.243061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.243074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.247314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.247363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.251616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.251664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.251676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.255723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.255783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.259677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.259724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.259737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.263865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 
00:18:23.812 [2024-07-15 22:48:39.263913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.263925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.268190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.268239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.268253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.272493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.272527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.272540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.276880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.276929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.276941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.281184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.281232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.281245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.285349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.285397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.285409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.289594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.289668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.289681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.293870] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.293902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.293915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.298088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.298122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.298135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.302328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.302362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.302374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.306787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.306836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.306849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.311134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.311183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.311196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.315595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.315654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.315684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.320080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.320127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.320139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.324455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.324489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.324501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.328804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.328837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.328851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.333070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.812 [2024-07-15 22:48:39.333118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.812 [2024-07-15 22:48:39.333130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.812 [2024-07-15 22:48:39.337467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.337515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.337527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.341730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.341778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.341790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.346165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.346199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.346211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.350429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.350479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.350492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.354848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.354896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.358976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.359024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.359036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.363270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.363317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.363329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.367526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.367586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.367601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.813 [2024-07-15 22:48:39.371928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:23.813 [2024-07-15 22:48:39.371962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.813 [2024-07-15 22:48:39.371975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.376614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.376679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.376692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.380784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.380834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.380846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.385015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.385062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.385074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.389581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.389625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.389638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.393701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.393734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.393747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.397991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.398029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.398042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.402214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.402247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.402260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.406634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.406665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.406678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.410848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.410885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.073 [2024-07-15 22:48:39.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.415101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.415138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.415151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.419508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.419543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.419556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.423765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.423801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.423814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.427992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.428025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.428037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.432218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.432252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.432265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.436542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.436599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.436612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.440807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.440841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.440854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.445040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.445073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.445086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.449289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.449323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.449336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.453529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.453574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.453588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.457884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.457917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.457930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.462208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.462242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.462255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.466534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.466579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.466593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.470831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.073 [2024-07-15 22:48:39.470864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.073 [2024-07-15 22:48:39.470877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.073 [2024-07-15 22:48:39.475085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.475119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.475132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.479333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.479379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.483503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.483538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.483550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.487648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.487680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.487693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.491944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.491978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.491991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.496233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.496275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.496288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.500343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 
00:18:24.074 [2024-07-15 22:48:39.500382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.500395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.504484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.504516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.504529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.508839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.508873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.508886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.513163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.513197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.513210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.517527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.517573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.521773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.521823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.521836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.526073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.526122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.526135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.530322] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.530355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.530368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.534687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.534719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.534732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.538804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.538854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.538867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.543047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.543097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.543110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.547327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.547361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.547374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.551657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.551690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.551703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.555822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.555855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.555868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.560058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.560091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.560104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.564398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.564432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.564445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.568720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.568753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.568765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.572961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.572994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.573007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.577361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.577411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.577424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.581561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.581621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.581634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.585848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.585881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.585894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.590157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.590192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.590205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.594480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.594514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.074 [2024-07-15 22:48:39.594527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.074 [2024-07-15 22:48:39.598768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.074 [2024-07-15 22:48:39.598832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.598845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.602952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.603014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.607408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.607442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.607456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.611775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.611807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.611818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.615864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.615896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.615908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.620005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.620037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.620049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.624337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.624370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.624383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.628529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.628575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.628589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.632846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.632878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.632890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.075 [2024-07-15 22:48:39.637242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.075 [2024-07-15 22:48:39.637274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.075 [2024-07-15 22:48:39.637286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.641637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.641684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.333 [2024-07-15 22:48:39.641698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.645923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.645957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:24.333 [2024-07-15 22:48:39.645999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.650190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.650224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.333 [2024-07-15 22:48:39.650236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.654534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.654598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.333 [2024-07-15 22:48:39.654612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.658873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.658907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.333 [2024-07-15 22:48:39.658920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.663172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.663205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.333 [2024-07-15 22:48:39.663217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:24.333 [2024-07-15 22:48:39.667466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6b3d0) 00:18:24.333 [2024-07-15 22:48:39.667499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.333 [2024-07-15 22:48:39.667511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:24.333 00:18:24.333 Latency(us) 00:18:24.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.333 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:24.333 nvme0n1 : 2.00 7223.21 902.90 0.00 0.00 2211.57 1854.37 4796.04 00:18:24.333 =================================================================================================================== 00:18:24.333 Total : 7223.21 902.90 0.00 0.00 2211.57 1854.37 4796.04 00:18:24.333 0 00:18:24.333 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:24.333 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:24.333 22:48:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:24.333 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:24.333 | .driver_specific 00:18:24.333 | .nvme_error 00:18:24.333 | .status_code 00:18:24.333 | .command_transient_transport_error' 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 466 > 0 )) 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80638 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80638 ']' 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80638 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80638 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:24.592 killing process with pid 80638 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80638' 00:18:24.592 Received shutdown signal, test time was about 2.000000 seconds 00:18:24.592 00:18:24.592 Latency(us) 00:18:24.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.592 =================================================================================================================== 00:18:24.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80638 00:18:24.592 22:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80638 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80694 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80694 /var/tmp/bperf.sock 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80694 ']' 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.850 
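The digest.sh trace above derives its pass/fail result by asking the running bdevperf instance for per-bdev I/O statistics over its RPC socket and extracting the transient transport error counter with jq (the `(( 466 > 0 ))` check at host/digest.sh@71). A minimal standalone sketch of that same query follows, using the socket path, RPC call, and jq filter exactly as they appear in the trace; the SPDK_DIR variable is illustrative and not part of the test script:

#!/usr/bin/env bash
# Sketch of the transient-error check performed by host/digest.sh above
# (get_transient_errcount). Assumes bdevperf is already running and
# listening on /var/tmp/bperf.sock; SPDK_DIR is a hypothetical variable
# standing in for the SPDK checkout used in the trace.

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# Pull per-bdev I/O statistics from the running bdevperf instance and
# keep only the transient transport error counter for nvme0n1.
errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# The digest-error test passes only if the injected CRC-32C corruption
# actually surfaced as transient transport errors (466 > 0 in the run above).
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"

The counter is non-zero in this run because the injected crc32c corruption makes every received data digest check fail, which the host then reports as the repeated "COMMAND TRANSIENT TRANSPORT ERROR" completions logged above.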
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.850 22:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:24.850 [2024-07-15 22:48:40.236960] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:18:24.850 [2024-07-15 22:48:40.237037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80694 ] 00:18:24.850 [2024-07-15 22:48:40.371100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.108 [2024-07-15 22:48:40.484428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.108 [2024-07-15 22:48:40.539012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:25.675 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.675 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:25.675 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:25.675 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:25.933 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:25.933 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.933 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:25.933 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.933 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.933 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.192 nvme0n1 00:18:26.451 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:26.451 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.451 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.451 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.451 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:26.451 22:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:18:26.451 Running I/O for 2 seconds... 00:18:26.451 [2024-07-15 22:48:41.916312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fef90 00:18:26.451 [2024-07-15 22:48:41.918871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:41.918926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:26.451 [2024-07-15 22:48:41.933730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190feb58 00:18:26.451 [2024-07-15 22:48:41.936229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:41.936308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:26.451 [2024-07-15 22:48:41.951161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fe2e8 00:18:26.451 [2024-07-15 22:48:41.953645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:41.953706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:26.451 [2024-07-15 22:48:41.968072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fda78 00:18:26.451 [2024-07-15 22:48:41.970537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:41.970600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:26.451 [2024-07-15 22:48:41.983950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fd208 00:18:26.451 [2024-07-15 22:48:41.986375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:41.986408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:26.451 [2024-07-15 22:48:41.999699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fc998 00:18:26.451 [2024-07-15 22:48:42.002116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:42.002149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:26.451 [2024-07-15 22:48:42.015507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fc128 00:18:26.451 [2024-07-15 22:48:42.017943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.451 [2024-07-15 22:48:42.017974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.031330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fb8b8 00:18:26.711 [2024-07-15 22:48:42.033767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.033798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.047122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fb048 00:18:26.711 [2024-07-15 22:48:42.049504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.049549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.062824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fa7d8 00:18:26.711 [2024-07-15 22:48:42.065185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.065230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.078229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f9f68 00:18:26.711 [2024-07-15 22:48:42.080634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.080665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.093681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f96f8 00:18:26.711 [2024-07-15 22:48:42.095945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.096009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.109058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f8e88 00:18:26.711 [2024-07-15 22:48:42.111265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.111310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.124689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f8618 00:18:26.711 [2024-07-15 22:48:42.126923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.126954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.140149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f7da8 00:18:26.711 [2024-07-15 22:48:42.142381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.142425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.155500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f7538 00:18:26.711 [2024-07-15 22:48:42.157703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.157748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.170906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f6cc8 00:18:26.711 [2024-07-15 22:48:42.173212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.173256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.186421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f6458 00:18:26.711 [2024-07-15 22:48:42.188682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.188738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.202676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f5be8 00:18:26.711 [2024-07-15 22:48:42.204831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.204866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.219202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f5378 00:18:26.711 [2024-07-15 22:48:42.221394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.221445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.235229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f4b08 00:18:26.711 [2024-07-15 22:48:42.237337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 
22:48:42.237375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.251081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f4298 00:18:26.711 [2024-07-15 22:48:42.253208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.253244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:26.711 [2024-07-15 22:48:42.266897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f3a28 00:18:26.711 [2024-07-15 22:48:42.268969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.711 [2024-07-15 22:48:42.269004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:26.970 [2024-07-15 22:48:42.282906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f31b8 00:18:26.970 [2024-07-15 22:48:42.284990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.970 [2024-07-15 22:48:42.285030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:26.970 [2024-07-15 22:48:42.298987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f2948 00:18:26.970 [2024-07-15 22:48:42.301061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.970 [2024-07-15 22:48:42.301099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:26.970 [2024-07-15 22:48:42.314918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f20d8 00:18:26.970 [2024-07-15 22:48:42.316950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.970 [2024-07-15 22:48:42.317001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:26.970 [2024-07-15 22:48:42.330699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f1868 00:18:26.970 [2024-07-15 22:48:42.332739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.970 [2024-07-15 22:48:42.332774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:26.970 [2024-07-15 22:48:42.346863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f0ff8 00:18:26.970 [2024-07-15 22:48:42.348911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:26.971 [2024-07-15 22:48:42.348947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.362479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f0788 00:18:26.971 [2024-07-15 22:48:42.364488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.364526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.377719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eff18 00:18:26.971 [2024-07-15 22:48:42.379627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.379660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.393659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ef6a8 00:18:26.971 [2024-07-15 22:48:42.395560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.395634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.409147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eee38 00:18:26.971 [2024-07-15 22:48:42.411103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.411136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.424568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ee5c8 00:18:26.971 [2024-07-15 22:48:42.426533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.426591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.440562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190edd58 00:18:26.971 [2024-07-15 22:48:42.442415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.442451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.456423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ed4e8 00:18:26.971 [2024-07-15 22:48:42.458270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22545 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.458305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.472433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ecc78 00:18:26.971 [2024-07-15 22:48:42.474272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.474307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.488365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ec408 00:18:26.971 [2024-07-15 22:48:42.490222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.490256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.504306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ebb98 00:18:26.971 [2024-07-15 22:48:42.506156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.506190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.519938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eb328 00:18:26.971 [2024-07-15 22:48:42.521758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:26.971 [2024-07-15 22:48:42.521794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:26.971 [2024-07-15 22:48:42.535577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eaab8 00:18:27.229 [2024-07-15 22:48:42.537414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.537450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.551304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ea248 00:18:27.229 [2024-07-15 22:48:42.553062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.553100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.566755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e99d8 00:18:27.229 [2024-07-15 22:48:42.568551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18856 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.568598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.582753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e9168 00:18:27.229 [2024-07-15 22:48:42.584427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.584462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.598632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e88f8 00:18:27.229 [2024-07-15 22:48:42.600293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.600334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.614454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e8088 00:18:27.229 [2024-07-15 22:48:42.616165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.616200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.630035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e7818 00:18:27.229 [2024-07-15 22:48:42.631684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.631718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.645679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e6fa8 00:18:27.229 [2024-07-15 22:48:42.647351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.647402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.661445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e6738 00:18:27.229 [2024-07-15 22:48:42.663107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.663140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.677239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e5ec8 00:18:27.229 [2024-07-15 22:48:42.678834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.678869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.693024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e5658 00:18:27.229 [2024-07-15 22:48:42.694587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.694629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.708860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e4de8 00:18:27.229 [2024-07-15 22:48:42.710393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.710429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.724646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e4578 00:18:27.229 [2024-07-15 22:48:42.726164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.726199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.740259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e3d08 00:18:27.229 [2024-07-15 22:48:42.741775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.741810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.755824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e3498 00:18:27.229 [2024-07-15 22:48:42.757304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.757339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.771664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e2c28 00:18:27.229 [2024-07-15 22:48:42.773112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.773150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:27.229 [2024-07-15 22:48:42.787082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e23b8 00:18:27.229 [2024-07-15 22:48:42.788594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.229 [2024-07-15 22:48:42.788628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.802964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e1b48 00:18:27.488 [2024-07-15 22:48:42.804452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.804489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.818819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e12d8 00:18:27.488 [2024-07-15 22:48:42.820265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.820324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.834898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e0a68 00:18:27.488 [2024-07-15 22:48:42.836273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.836324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.850623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e01f8 00:18:27.488 [2024-07-15 22:48:42.851964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.851999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.866099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190df988 00:18:27.488 [2024-07-15 22:48:42.867472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.867506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.881753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190df118 00:18:27.488 [2024-07-15 22:48:42.883112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.883146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.897647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190de8a8 00:18:27.488 
[2024-07-15 22:48:42.898930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.898966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.913638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190de038 00:18:27.488 [2024-07-15 22:48:42.914902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.914942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.936756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190de038 00:18:27.488 [2024-07-15 22:48:42.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.939301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.952742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190de8a8 00:18:27.488 [2024-07-15 22:48:42.955199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.955236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.968666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190df118 00:18:27.488 [2024-07-15 22:48:42.971127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.971163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:42.984330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190df988 00:18:27.488 [2024-07-15 22:48:42.986796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:42.986832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:43.000131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e01f8 00:18:27.488 [2024-07-15 22:48:43.002567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:43.002629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:43.016008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with 
pdu=0x2000190e0a68 00:18:27.488 [2024-07-15 22:48:43.018392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:43.018428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:43.031887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e12d8 00:18:27.488 [2024-07-15 22:48:43.034274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:43.034310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:27.488 [2024-07-15 22:48:43.047889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e1b48 00:18:27.488 [2024-07-15 22:48:43.050257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.488 [2024-07-15 22:48:43.050293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:27.747 [2024-07-15 22:48:43.063778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e23b8 00:18:27.747 [2024-07-15 22:48:43.066121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.747 [2024-07-15 22:48:43.066156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:27.747 [2024-07-15 22:48:43.079506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e2c28 00:18:27.748 [2024-07-15 22:48:43.081833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.081867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.095320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e3498 00:18:27.748 [2024-07-15 22:48:43.097660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.097694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.111008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e3d08 00:18:27.748 [2024-07-15 22:48:43.113326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.113361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.126820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x13a8a70) with pdu=0x2000190e4578 00:18:27.748 [2024-07-15 22:48:43.129099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.129136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.142565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e4de8 00:18:27.748 [2024-07-15 22:48:43.144803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.144838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.158381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e5658 00:18:27.748 [2024-07-15 22:48:43.160603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.160638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.174010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e5ec8 00:18:27.748 [2024-07-15 22:48:43.176188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.176223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.189868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e6738 00:18:27.748 [2024-07-15 22:48:43.192031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.192065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.205725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e6fa8 00:18:27.748 [2024-07-15 22:48:43.207897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.207932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.221713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e7818 00:18:27.748 [2024-07-15 22:48:43.223839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.223873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.237803] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e8088 00:18:27.748 [2024-07-15 22:48:43.239965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.240014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.253747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e88f8 00:18:27.748 [2024-07-15 22:48:43.255871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.255906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.269128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e9168 00:18:27.748 [2024-07-15 22:48:43.271266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.271299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.284974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190e99d8 00:18:27.748 [2024-07-15 22:48:43.287051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.287088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:27.748 [2024-07-15 22:48:43.301387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ea248 00:18:27.748 [2024-07-15 22:48:43.303521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.748 [2024-07-15 22:48:43.303609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.317730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eaab8 00:18:28.007 [2024-07-15 22:48:43.319731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.319768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.333331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eb328 00:18:28.007 [2024-07-15 22:48:43.335336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.335372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.349174] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ebb98 00:18:28.007 [2024-07-15 22:48:43.351168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.351209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.365003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ec408 00:18:28.007 [2024-07-15 22:48:43.366957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.366996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.380766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ecc78 00:18:28.007 [2024-07-15 22:48:43.382701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.382738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.396485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ed4e8 00:18:28.007 [2024-07-15 22:48:43.398411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.398447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.412216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190edd58 00:18:28.007 [2024-07-15 22:48:43.414120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.414155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.427926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ee5c8 00:18:28.007 [2024-07-15 22:48:43.429810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.429846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.443604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eee38 00:18:28.007 [2024-07-15 22:48:43.445460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.445497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 
22:48:43.459330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190ef6a8 00:18:28.007 [2024-07-15 22:48:43.461218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.461252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.475247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190eff18 00:18:28.007 [2024-07-15 22:48:43.477087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.477122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.491165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f0788 00:18:28.007 [2024-07-15 22:48:43.492973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.493009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.507058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f0ff8 00:18:28.007 [2024-07-15 22:48:43.508895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.508944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.523315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f1868 00:18:28.007 [2024-07-15 22:48:43.525107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.525141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.539061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f20d8 00:18:28.007 [2024-07-15 22:48:43.540833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.540866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:28.007 [2024-07-15 22:48:43.554778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f2948 00:18:28.007 [2024-07-15 22:48:43.556541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.556588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:18:28.007 [2024-07-15 22:48:43.570069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f31b8 00:18:28.007 [2024-07-15 22:48:43.571867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.007 [2024-07-15 22:48:43.571901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.585734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f3a28 00:18:28.266 [2024-07-15 22:48:43.587420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.587453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.601512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f4298 00:18:28.266 [2024-07-15 22:48:43.603267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.603302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.617239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f4b08 00:18:28.266 [2024-07-15 22:48:43.618936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.618984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.632485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f5378 00:18:28.266 [2024-07-15 22:48:43.634109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.634142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.647717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f5be8 00:18:28.266 [2024-07-15 22:48:43.649347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.649383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.663363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f6458 00:18:28.266 [2024-07-15 22:48:43.665011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.665044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.678823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f6cc8 00:18:28.266 [2024-07-15 22:48:43.680443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.680479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.694020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f7538 00:18:28.266 [2024-07-15 22:48:43.695566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.695640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.709462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f7da8 00:18:28.266 [2024-07-15 22:48:43.711020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.711054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.724885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f8618 00:18:28.266 [2024-07-15 22:48:43.726399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.726432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.742473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f8e88 00:18:28.266 [2024-07-15 22:48:43.743999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.744056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.759504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f96f8 00:18:28.266 [2024-07-15 22:48:43.761036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.761100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.776846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190f9f68 00:18:28.266 [2024-07-15 22:48:43.778316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.778369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.794116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fa7d8 00:18:28.266 [2024-07-15 22:48:43.795589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.795696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.811225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fb048 00:18:28.266 [2024-07-15 22:48:43.812804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.812860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:28.266 [2024-07-15 22:48:43.829515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fb8b8 00:18:28.266 [2024-07-15 22:48:43.831037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.266 [2024-07-15 22:48:43.831093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:28.524 [2024-07-15 22:48:43.847076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fc128 00:18:28.524 [2024-07-15 22:48:43.848511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.524 [2024-07-15 22:48:43.848586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:28.524 [2024-07-15 22:48:43.864439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fc998 00:18:28.524 [2024-07-15 22:48:43.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.524 [2024-07-15 22:48:43.865872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:28.524 [2024-07-15 22:48:43.880182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fd208 00:18:28.524 [2024-07-15 22:48:43.881545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.524 [2024-07-15 22:48:43.881605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:28.524 [2024-07-15 22:48:43.895984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8a70) with pdu=0x2000190fda78 00:18:28.524 [2024-07-15 22:48:43.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.524 [2024-07-15 22:48:43.897365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:18:28.524
00:18:28.524 Latency(us)
00:18:28.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.524 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:28.524 nvme0n1 : 2.01 15901.04 62.11 0.00 0.00 8043.29 2398.02 33602.09
00:18:28.524 ===================================================================================================================
00:18:28.524 Total : 15901.04 62.11 0.00 0.00 8043.29 2398.02 33602.09
00:18:28.524 0
00:18:28.524 22:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:28.524 22:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:28.524 22:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:28.524 | .driver_specific
00:18:28.524 | .nvme_error
00:18:28.524 | .status_code
00:18:28.524 | .command_transient_transport_error'
00:18:28.524 22:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 ))
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80694
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80694 ']'
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80694
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80694
00:18:28.782 killing process with pid 80694
Received shutdown signal, test time was about 2.000000 seconds
00:18:28.782
00:18:28.782 Latency(us)
00:18:28.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.782 ===================================================================================================================
00:18:28.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80694'
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80694
00:18:28.782 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80694
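The get_transient_errcount step traced just above is where this pass is judged: host/digest.sh reads bdevperf's per-bdev I/O statistics over the private RPC socket and extracts how many completions came back as COMMAND TRANSIENT TRANSPORT ERROR, and the (( 125 > 0 )) check requires that the injected CRC corruption produced at least one such error. A minimal stand-alone sketch of that query, using only the rpc.py invocation and jq filter that appear in the trace (the errcount variable name is illustrative):

    # Query the bdevperf instance listening on /var/tmp/bperf.sock for nvme0n1's I/O statistics
    # and extract the count of completions that failed with COMMAND TRANSIENT TRANSPORT ERROR.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The pass only counts as successful if error injection actually produced transient errors.
    (( errcount > 0 ))

The nvme_error counters under driver_specific are populated because bdev_nvme is configured with --nvme-error-stat (the same option visible in the set_options call for the next pass below); the Fail/s column in the summary staying at 0.00 while 125 errors were counted is consistent with the failed WRITEs being retried at the bdev layer and eventually completing.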
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80753
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80753 /var/tmp/bperf.sock
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80753 ']'
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:29.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:29.040 22:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:29.040 [2024-07-15 22:48:44.488292] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization...
00:18:29.040 [2024-07-15 22:48:44.488633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80753 ]
00:18:29.040 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:29.040 Zero copy mechanism will not be used.
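The trace above launches a fresh bdevperf for the 128 KiB random-write error pass: core mask 0x2, a private RPC socket at /var/tmp/bperf.sock, queue depth 16, a 2 second run, and -z so the job stays idle until perform_tests is issued. A rough, simplified sketch of that launch-and-wait step, with the polling loop standing in for autotest_common.sh's waitforlisten helper (rpc_get_methods is only used here as a cheap liveness probe):

    # Start bdevperf pinned to core 1 (-m 2) with its JSON-RPC server on a private UNIX socket;
    # 128 KiB random writes (-o 131072) at queue depth 16 for 2 seconds, idle until started (-z).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified stand-in for waitforlisten: block until the RPC socket exists and answers.
    until [[ -S /var/tmp/bperf.sock ]] &&
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done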
00:18:29.298 [2024-07-15 22:48:44.630115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:29.298 [2024-07-15 22:48:44.735099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:29.298 [2024-07-15 22:48:44.788084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:30.230 22:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:30.488 nvme0n1
00:18:30.488 22:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:18:30.488 22:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:30.488 22:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:30.488 22:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:30.488 22:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:30.488 22:48:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:30.746 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:30.746 Zero copy mechanism will not be used.
00:18:30.746 Running I/O for 2 seconds...
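Between the controller attach and the "Running I/O for 2 seconds..." line above, the trace wires up the fault injection for this pass: NVMe error counters are enabled and the bdev retry count is set to -1 (unlimited retries, so injected failures are counted rather than failing the job), the controller is attached over TCP with --ddgst so every data PDU carries a CRC32C data digest, and the accel_error module is switched to corrupt crc32c operations, which is what makes a fraction of the WRITEs below complete with COMMAND TRANSIENT TRANSPORT ERROR. A condensed sketch of that RPC sequence, reusing the commands and flags from the trace; note that the accel_error_inject_error calls go through rpc_cmd in the script, which is assumed here to talk to the application's default RPC socket rather than to bperf.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Collect per-status-code NVMe error counts and retry failed I/O indefinitely in the bdev layer.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c error injection disabled while the controller is attached.
    $rpc accel_error_inject_error -o crc32c -t disable

    # Attach the target subsystem over TCP with data digest (--ddgst) enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Switch crc32c injection to 'corrupt' mode with the same -i 32 argument used by the test.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Release the queued bdevperf job (-z) for its 2-second run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests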
00:18:30.746 [2024-07-15 22:48:46.115926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.746 [2024-07-15 22:48:46.116248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.746 [2024-07-15 22:48:46.116328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.746 [2024-07-15 22:48:46.121357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.746 [2024-07-15 22:48:46.121707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.746 [2024-07-15 22:48:46.121741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.746 [2024-07-15 22:48:46.126510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.746 [2024-07-15 22:48:46.126818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.746 [2024-07-15 22:48:46.126847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.746 [2024-07-15 22:48:46.131697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.746 [2024-07-15 22:48:46.132015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.132044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.136782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.137101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.137129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.142072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.142374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.142404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.147324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.147657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.147685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.152539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.152852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.152885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.157723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.158034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.158061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.162791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.163087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.163115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.167815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.168131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.168158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.172987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.173310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.173339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.178106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.178404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.178432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.183186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.183474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.183502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.188328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.188642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.188670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.193454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.193796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.193828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.198624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.198918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.198946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.203731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.204037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.204063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.208876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.209173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.209201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.214027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.214334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.214363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.219064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.219366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.219394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.224178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.224496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.224524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.229197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.229489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.229517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.234361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.234666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.234698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.239490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.239834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.239866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.244714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.244999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.245029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.249783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.250071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.250094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.255036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.255519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 
[2024-07-15 22:48:46.255723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.260816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.261313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.261458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.266209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.266554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.266605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.271404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.271753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.271780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.276510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.276828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.747 [2024-07-15 22:48:46.276860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.747 [2024-07-15 22:48:46.281717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.747 [2024-07-15 22:48:46.282053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.748 [2024-07-15 22:48:46.282081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.748 [2024-07-15 22:48:46.286959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.748 [2024-07-15 22:48:46.287268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.748 [2024-07-15 22:48:46.287296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.748 [2024-07-15 22:48:46.292089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.748 [2024-07-15 22:48:46.292419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.748 [2024-07-15 22:48:46.292447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.748 [2024-07-15 22:48:46.297384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.748 [2024-07-15 22:48:46.297745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.748 [2024-07-15 22:48:46.297776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.748 [2024-07-15 22:48:46.302591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.748 [2024-07-15 22:48:46.302884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.748 [2024-07-15 22:48:46.302911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.748 [2024-07-15 22:48:46.307738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:30.748 [2024-07-15 22:48:46.308056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.748 [2024-07-15 22:48:46.308089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.313088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.313385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.313414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.318324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.318652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.318676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.323379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.323717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.323748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.328493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.328799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.328842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.333678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.333994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.334023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.338764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.339100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.339127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.343883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.344174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.344203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.348958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.349269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.349303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.354133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.354431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.354459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.359332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.359638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.359665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.364433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.364748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.364775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.369509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.369846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.369877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.374712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.375017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.375045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.379859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.380150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.380178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.385106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.385414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.390213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.390537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.390574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.395307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.395633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.395660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.400238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 
[2024-07-15 22:48:46.400572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.400611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.405385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.405716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.405747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.410463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.410807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.410838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.415536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.006 [2024-07-15 22:48:46.415873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.006 [2024-07-15 22:48:46.415921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.006 [2024-07-15 22:48:46.420661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.420954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.420981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.425648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.425945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.425972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.430716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.431015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.431042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.435829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.436332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.436370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.441238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.441535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.441573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.446363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.446720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.446756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.451507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.451816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.451843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.456570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.456876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.456903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.461659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.461947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.461973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.466638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.466935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.466961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.471629] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.471937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.471974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.476715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.477026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.477053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.481803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.482108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.482135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.487105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.487422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.487451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.492258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.492576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.492603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.497423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.497730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.497758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.502421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:31.007 [2024-07-15 22:48:46.507610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.507905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.507942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.512745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.513037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.513065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.517830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.518126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.518153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.522947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.523247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.523274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.528086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.528404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.528432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.533273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.533579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.533607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.538424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.538744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.538772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.543634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.543932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.543969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.548701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.548993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.549024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.553825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.554117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.554157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.558996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.559322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.559355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.564279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.007 [2024-07-15 22:48:46.564609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.007 [2024-07-15 22:48:46.564641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.007 [2024-07-15 22:48:46.569617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.008 [2024-07-15 22:48:46.569924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.008 [2024-07-15 22:48:46.569955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.574814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.575109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.575140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.579996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.580335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.580366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.585124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.585448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.585481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.590208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.590532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.590573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.595374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.595716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.595747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.600505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.600809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.600841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.605805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.606116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.606150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.610962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.611283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.611316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.616017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.616358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.616390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.621127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.621432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.621456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.626251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.626576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.626618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.631416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.631753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.631785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.636556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.636868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.636903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.641658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.641966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.641997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.646748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.647058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 
[2024-07-15 22:48:46.647090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.651814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.652128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.652159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.656907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.657232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.657265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.662074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.662395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.662429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.667167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.667488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.667520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.672245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.672563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.672605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.677285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.677581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.677622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.682414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.682788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.687522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.687833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.687865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.692700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.692998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.693030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.697790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.698092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.698123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.702901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.703197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.703229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.707997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.708312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.708344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.713061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.713355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.713387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.718177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.266 [2024-07-15 22:48:46.718502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.266 [2024-07-15 22:48:46.718537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.266 [2024-07-15 22:48:46.723338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.723645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.723669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.728406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.728722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.728753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.733540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.733846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.733880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.738594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.738885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.738920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.743608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.743899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.743933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.748705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.748996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.749029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.753764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.754064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.754098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.758905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.759198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.759229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.764049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.764362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.764399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.769194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.769488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.769522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.774235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.774528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.774572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.779334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.779641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.779676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.784436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.784756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.784786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.789497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 
[2024-07-15 22:48:46.789806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.789836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.794527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.794832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.794866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.799631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.799924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.799958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.804706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.804997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.805031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.809754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.810050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.810083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.814832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.815121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.815151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.819891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.820182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.820213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.824999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.825294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.825329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.267 [2024-07-15 22:48:46.830090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.267 [2024-07-15 22:48:46.830388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.267 [2024-07-15 22:48:46.830420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.835171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.835463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.835495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.840279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.840592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.840623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.845374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.845709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.845740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.850497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.850800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.850831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.855510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.855817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.855848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.860630] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.860922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.860953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.865687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.865978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.866010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.870748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.871042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.871073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.875787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.876081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.876102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.880838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.881133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.881164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.885832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.886123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.886154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.890851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.891149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.891181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
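Every repetition in this block follows the same three-entry pattern: the TCP transport reports a data digest mismatch in data_crc32_calc_done, the outstanding WRITE is printed, and the command is completed with the generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22), which appears to be the expected outcome of the digest-error path being exercised by this test. In NVMe/TCP the data digest (DDGST) is a CRC-32C computed over the DATA field of the PDU. The sketch below is a minimal, table-free CRC-32C in Python, purely illustrative and not SPDK code, assuming the standard Castagnoli polynomial, to show the checksum the transport is verifying when it logs these errors.

# Minimal CRC-32C (Castagnoli) sketch -- illustrative only, not SPDK code.
# SPDK uses its own optimized CRC-32C helpers; this just shows the checksum
# behind the NVMe/TCP data digest (DDGST) that data_crc32_calc_done verifies.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78  # reflected CRC-32C polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value; a received PDU whose recomputed digest does
# not match its DDGST field is the condition reported as "Data digest error".
assert crc32c(b"123456789") == 0xE3069283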
00:18:31.526 [2024-07-15 22:48:46.895947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.896278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.896309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.901081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.901390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.901422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.906142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.906447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.906479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.911232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.911539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.911581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.916278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.916600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.916631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.921417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.921744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.921775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.926550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.926873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.926896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.931687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.932001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.932032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.936772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.937063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.937102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.941820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.942129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.942160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.946863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.947158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.947189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.951887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.952197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.952232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.956953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.957261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.957292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.962039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.962347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.962379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.967091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.967399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.967431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.972205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.972522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.972574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.977320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.977660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.977691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.982529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.982834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.982866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.987748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.988055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.988086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.992925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.993228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.993256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:46.997987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:46.998309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:46.998354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.003129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.003425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.003458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.008525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.008832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.008863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.013753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.014062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.014093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.018795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.019091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.019123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.023932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.024254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.024297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.029374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.029732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.029768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.034650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.034955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 
[2024-07-15 22:48:47.034986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.039844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.040173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.040205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.045226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.045561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.045602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.050573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.050895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.050926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.055662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.055958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.055988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.060848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.061140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.061172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.066120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.066471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.066513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.071369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.071735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.071766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.076497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.076819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.076858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.081799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.082132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.082164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.526 [2024-07-15 22:48:47.086914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.526 [2024-07-15 22:48:47.087234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.526 [2024-07-15 22:48:47.087265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.092346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.783 [2024-07-15 22:48:47.092651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.783 [2024-07-15 22:48:47.092682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.097438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.783 [2024-07-15 22:48:47.097795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.783 [2024-07-15 22:48:47.097825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.102589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.783 [2024-07-15 22:48:47.102916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.783 [2024-07-15 22:48:47.102948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.107778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.783 [2024-07-15 22:48:47.108121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.783 [2024-07-15 22:48:47.108151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.113022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.783 [2024-07-15 22:48:47.113350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.783 [2024-07-15 22:48:47.113382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.118250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.783 [2024-07-15 22:48:47.118557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.783 [2024-07-15 22:48:47.118579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.783 [2024-07-15 22:48:47.123426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.123770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.123801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.128616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.128908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.128938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.133750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.134055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.134086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.138841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.139142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.139174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.144152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.144459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.144490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.149501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.149811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.149843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.154751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.155042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.155073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.160061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.160362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.160393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.165466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.165793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.165824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.170781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.171096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.171127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.176058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.176378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.176409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.181226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 
[2024-07-15 22:48:47.181552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.181596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.186458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.186803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.186834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.191717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.192033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.192065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.196835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.197130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.197161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.202230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.202525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.202556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.207362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.207669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.207701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.212637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.212929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.212960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.217838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.218147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.218177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.222984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.223305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.223337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.228127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.228470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.228501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.233317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.233666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.233710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.238543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.238891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.238922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.243740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.244065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.244096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.248822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.249180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.249211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.254041] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.254372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.254403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.259287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.259620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.259661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.264595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.264899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.264931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.269838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.270173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.270204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.275160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.275467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.275498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.280430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.280733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.280764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.285779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.286119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.286151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
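Because the block repeats the same pattern with only the timestamp, lba, and sqhd fields changing, it is easier to review in aggregate. The helper below is hypothetical (not part of SPDK or the autotest scripts); it assumes a saved copy of this console output and the line format shown above, and simply tallies the digest errors and the WRITE commands printed alongside them.

# Hypothetical log summarizer -- not part of SPDK or this test suite.
# Counts "Data digest error" events and the WRITE command prints that
# accompany them, using the line format visible in this console log.
import re
import sys
from collections import Counter

DIGEST_RE = re.compile(r"data_crc32_calc_done: \*ERROR\*: Data digest error")
WRITE_RE = re.compile(r"WRITE sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def summarize(path: str) -> None:
    digest_errors = 0
    lbas = Counter()
    with open(path, "r", errors="replace") as log:
        for line in log:
            digest_errors += len(DIGEST_RE.findall(line))
            for match in WRITE_RE.finditer(line):
                lbas[int(match.group(4))] += 1
    print(f"data digest errors: {digest_errors}")
    print(f"failed WRITEs: {sum(lbas.values())} across {len(lbas)} distinct LBAs")

if __name__ == "__main__":
    summarize(sys.argv[1])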
00:18:31.784 [2024-07-15 22:48:47.291111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.291414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.291448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.296436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.296742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.296773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.301776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.302131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.302162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.307245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.307542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.307582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.312729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.313067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.313098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.318018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.318329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.318361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.323100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.323408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.323440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.328159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.784 [2024-07-15 22:48:47.328466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.784 [2024-07-15 22:48:47.328497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.784 [2024-07-15 22:48:47.333272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.785 [2024-07-15 22:48:47.333582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.785 [2024-07-15 22:48:47.333613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.785 [2024-07-15 22:48:47.338560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.785 [2024-07-15 22:48:47.338910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.785 [2024-07-15 22:48:47.338940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:31.785 [2024-07-15 22:48:47.343815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.785 [2024-07-15 22:48:47.344141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.785 [2024-07-15 22:48:47.344172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.785 [2024-07-15 22:48:47.349122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:31.785 [2024-07-15 22:48:47.349472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.785 [2024-07-15 22:48:47.349504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.043 [2024-07-15 22:48:47.354358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.043 [2024-07-15 22:48:47.354687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.043 [2024-07-15 22:48:47.354717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.043 [2024-07-15 22:48:47.359493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.043 [2024-07-15 22:48:47.359817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.043 [2024-07-15 22:48:47.359847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.043 [2024-07-15 22:48:47.364757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.365051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.365081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.370035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.370352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.370384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.375467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.375806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.375835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.380652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.380944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.380975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.385849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.386205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.386236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.391237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.391530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.391572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.396761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.397098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.397129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.402342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.402694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.402725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.407732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.408084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.408114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.413088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.413380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.413412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.418354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.418687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.418717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.423715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.424024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.424055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.429002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.429326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.429357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.434386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.434748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 
[2024-07-15 22:48:47.434777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.439768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.440144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.445004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.445312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.445343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.450341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.450673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.450703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.455705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.455999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.456030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.461028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.461359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.461390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.466310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.466640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.466680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.471825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.472144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.472175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.477087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.477378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.477410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.482407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.482751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.482782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.487740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.488064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.493108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.493405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.493436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.498314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.498633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.498676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.503784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.504110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.504141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.509123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.509416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.509451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.044 [2024-07-15 22:48:47.514360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.044 [2024-07-15 22:48:47.514663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.044 [2024-07-15 22:48:47.514693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.519542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.519865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.519895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.524859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.525187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.525218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.530207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.530537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.530576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.535526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.535873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.535903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.540893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.541211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.541241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.546166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.546518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.546549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.551526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.551859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.551889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.556845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.557143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.557174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.562074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.562371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.562402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.567243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.567540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.567580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.572327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.572646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.572675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.577619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.577934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.577963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.582985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 
[2024-07-15 22:48:47.583300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.583331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.588256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.588579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.588612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.593792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.594092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.594123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.599089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.599387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.599418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.045 [2024-07-15 22:48:47.604458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.045 [2024-07-15 22:48:47.604767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.045 [2024-07-15 22:48:47.604797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.303 [2024-07-15 22:48:47.609747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.303 [2024-07-15 22:48:47.610049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.303 [2024-07-15 22:48:47.610079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.303 [2024-07-15 22:48:47.614884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.303 [2024-07-15 22:48:47.615186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.303 [2024-07-15 22:48:47.615217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.303 [2024-07-15 22:48:47.620134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.303 [2024-07-15 22:48:47.620437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.303 [2024-07-15 22:48:47.620468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.303 [2024-07-15 22:48:47.625270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.303 [2024-07-15 22:48:47.625610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.303 [2024-07-15 22:48:47.625659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.630569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.630900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.630929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.635805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.636104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.636134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.640884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.641199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.641230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.645997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.646297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.646327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.651192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.651483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.651519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.656373] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.656677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.656708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.661581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.661951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.661983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.666845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.667162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.667194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.672025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.672361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.672393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.677210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.677503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.677534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.682398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.682772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.682802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.687770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.688116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.688148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:32.304 [2024-07-15 22:48:47.693015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.693350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.693382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.698278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.698641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.698693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.703784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.704087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.704120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.708817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.709116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.709148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.713913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.714223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.714254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.719222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.719516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.719548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.724466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.724773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.724805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.729740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.730081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.730113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.734986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.735324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.735356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.740199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.740514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.740546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.745469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.745815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.745845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.750732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.751071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.751103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.755961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.756311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.756343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.761144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.761466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.761497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.766197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.766544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.766583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.771406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.771716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.771747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.776473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.304 [2024-07-15 22:48:47.776779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.304 [2024-07-15 22:48:47.776810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.304 [2024-07-15 22:48:47.781716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.782057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.782088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.786973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.787310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.787342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.792245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.792562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.792604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.797515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.797891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.797921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.802702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.803035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.803066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.807968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.808306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.813244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.813546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.813591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.818741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.819095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.819126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.823962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.824255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.824294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.829058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.829354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.829385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.834274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.834569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 
[2024-07-15 22:48:47.834610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.839605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.839919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.839950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.844924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.845217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.845249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.850221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.850518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.850549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.855522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.855873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.855904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.860912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.861207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.861239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.305 [2024-07-15 22:48:47.866204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.305 [2024-07-15 22:48:47.866495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.305 [2024-07-15 22:48:47.866527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.871405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.871719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.871750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.876525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.876831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.876864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.881727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.882039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.882071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.886816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.887142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.887173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.892012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.892314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.892336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.897022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.897316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.897348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.902120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.902419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.902451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.907194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.907487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.907518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.912330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.564 [2024-07-15 22:48:47.912649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.564 [2024-07-15 22:48:47.912676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.564 [2024-07-15 22:48:47.917395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.917714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.917736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.922462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.922768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.922799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.927716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.928009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.928039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.932867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.933160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.933191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.938081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.938377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.938408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.943382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.943755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.943784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.948743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.949035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.949065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.953897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.954196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.954227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.959042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.959350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.959381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.964298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.964612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.964642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.969388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.969695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.969725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.974475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.974787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.974818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.979791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 
[2024-07-15 22:48:47.980103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.980133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.985032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.985323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.985354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.990245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.990540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.990580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:47.995500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:47.995828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:47.995858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.000641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:48.000935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:48.000965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.005881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:48.006201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:48.006232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.011016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:48.011316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:48.011347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.016076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:48.016383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:48.016416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.021317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:48.021611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:48.021654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.026473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.565 [2024-07-15 22:48:48.026821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.565 [2024-07-15 22:48:48.026852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.565 [2024-07-15 22:48:48.031758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.032050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.032081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.037031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.037338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.037374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.042314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.042652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.042682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.047445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.047759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.047790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.052748] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.053098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.053129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.058100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.058392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.058423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.063335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.063678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.063708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.068577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.068900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.068929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.073881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.074172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.074202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.079107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.079399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.079431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.084182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.084490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.084521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:32.566 [2024-07-15 22:48:48.089324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.089632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.089661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.094781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.095113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.095143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.100036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.100342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.100373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.105360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.105782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:32.566 [2024-07-15 22:48:48.110767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13a8c10) with pdu=0x2000190fef90 00:18:32.566 [2024-07-15 22:48:48.110956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.566 [2024-07-15 22:48:48.111013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.566 00:18:32.566 Latency(us) 00:18:32.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:32.566 nvme0n1 : 2.00 5964.21 745.53 0.00 0.00 2676.15 1534.14 5689.72 00:18:32.566 =================================================================================================================== 00:18:32.566 Total : 5964.21 745.53 0.00 0.00 2676.15 1534.14 5689.72 00:18:32.566 0 00:18:32.824 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:32.824 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:32.824 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:32.824 | .driver_specific 00:18:32.824 | .nvme_error 00:18:32.824 | .status_code 00:18:32.824 | .command_transient_transport_error' 
00:18:32.824 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80753 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80753 ']' 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80753 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80753 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:33.084 killing process with pid 80753 00:18:33.084 Received shutdown signal, test time was about 2.000000 seconds 00:18:33.084 00:18:33.084 Latency(us) 00:18:33.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.084 =================================================================================================================== 00:18:33.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80753' 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80753 00:18:33.084 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80753 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80547 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80547 ']' 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80547 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80547 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:33.345 killing process with pid 80547 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80547' 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80547 00:18:33.345 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80547 00:18:33.603 00:18:33.603 real 0m18.548s 00:18:33.603 user 0m35.979s 00:18:33.603 sys 0m4.752s 00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.603 ************************************ 00:18:33.603 END TEST nvmf_digest_error 00:18:33.603 ************************************ 00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:33.603 22:48:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.603 rmmod nvme_tcp 00:18:33.603 rmmod nvme_fabrics 00:18:33.603 rmmod nvme_keyring 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80547 ']' 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80547 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80547 ']' 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80547 00:18:33.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80547) - No such process 00:18:33.603 Process with pid 80547 is not found 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80547 is not found' 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:33.603 00:18:33.603 real 0m38.153s 00:18:33.603 user 1m12.816s 00:18:33.603 sys 0m9.820s 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.603 22:48:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:33.603 ************************************ 00:18:33.603 END TEST nvmf_digest 00:18:33.603 ************************************ 00:18:33.862 22:48:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:33.862 22:48:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:33.862 22:48:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 
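Between the digest suite closing above and the multipath suite starting below, nvmftestfini unwinds the host: sync outstanding I/O, unload the NVMe/TCP fabrics modules, make sure the target pid is gone, drop the SPDK target namespace, and flush the leftover initiator address. The condensed sketch below follows the steps as they appear in the trace; the ordering and inline comments are mine, the commands and names are the ones traced above.

# Condensed sketch of the nvmftestfini teardown traced above.
sync                                      # settle outstanding I/O first
modprobe -v -r nvme-tcp                   # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill -0 80547 2>/dev/null && kill 80547   # already exited here: "No such process"
_remove_spdk_ns                           # drop nvmf_tgt_ns_spdk if it still exists
ip -4 addr flush nvmf_init_if             # clear the initiator veth address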
00:18:33.862 22:48:49 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:33.862 22:48:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:33.862 22:48:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.862 22:48:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.862 ************************************ 00:18:33.862 START TEST nvmf_host_multipath 00:18:33.862 ************************************ 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:33.862 * Looking for test storage... 00:18:33.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:33.862 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:33.862 Cannot 
find device "nvmf_tgt_br" 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.863 Cannot find device "nvmf_tgt_br2" 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:33.863 Cannot find device "nvmf_tgt_br" 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:33.863 Cannot find device "nvmf_tgt_br2" 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:33.863 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.122 22:48:49 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:34.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:18:34.122 00:18:34.122 --- 10.0.0.2 ping statistics --- 00:18:34.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.122 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:18:34.122 00:18:34.122 --- 10.0.0.3 ping statistics --- 00:18:34.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.122 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:34.122 00:18:34.122 --- 10.0.0.1 ping statistics --- 00:18:34.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.122 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.122 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81019 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81019 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81019 ']' 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.381 22:48:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:34.381 [2024-07-15 22:48:49.766069] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:18:34.381 [2024-07-15 22:48:49.766160] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.381 [2024-07-15 22:48:49.905932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:34.639 [2024-07-15 22:48:50.014279] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.639 [2024-07-15 22:48:50.014347] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
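The 10.0.0.x pings above close out nvmf_veth_init: the target runs inside the nvmf_tgt_ns_spdk namespace with two veth legs at 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on nvmf_init_if, and all legs hang off the nvmf_br bridge with an iptables accept rule for TCP port 4420. The sketch below reconstructs that topology from the ip/iptables commands in the trace; the only additions are the loop that brings the host-side links up and the comments.

# Sketch of the veth/namespace topology built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
  ip link set "$link" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # target legs reachable
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # and back to the host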
00:18:34.639 [2024-07-15 22:48:50.014358] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.640 [2024-07-15 22:48:50.014367] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.640 [2024-07-15 22:48:50.014375] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.640 [2024-07-15 22:48:50.014867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.640 [2024-07-15 22:48:50.014869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.640 [2024-07-15 22:48:50.069917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81019 00:18:35.576 22:48:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:35.576 [2024-07-15 22:48:51.042644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.576 22:48:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:35.835 Malloc0 00:18:35.835 22:48:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:36.094 22:48:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:36.352 22:48:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.610 [2024-07-15 22:48:52.069094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.611 22:48:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:36.868 [2024-07-15 22:48:52.293217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81069 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81069 
/var/tmp/bdevperf.sock 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81069 ']' 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.868 22:48:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:37.804 22:48:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.804 22:48:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:37.804 22:48:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:38.063 22:48:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:38.322 Nvme0n1 00:18:38.322 22:48:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:38.581 Nvme0n1 00:18:38.581 22:48:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:38.581 22:48:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.957 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:39.957 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:39.957 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:40.216 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:40.216 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:40.216 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81114 00:18:40.216 22:48:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:46.782 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:46.782 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:46.782 22:49:01 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.783 Attaching 4 probes... 00:18:46.783 @path[10.0.0.2, 4421]: 17321 00:18:46.783 @path[10.0.0.2, 4421]: 17808 00:18:46.783 @path[10.0.0.2, 4421]: 17885 00:18:46.783 @path[10.0.0.2, 4421]: 17696 00:18:46.783 @path[10.0.0.2, 4421]: 17604 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81114 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:46.783 22:49:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:46.783 22:49:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:47.042 22:49:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:47.042 22:49:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81227 00:18:47.042 22:49:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:47.042 22:49:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.591 Attaching 4 probes... 
00:18:53.591 @path[10.0.0.2, 4420]: 17918 00:18:53.591 @path[10.0.0.2, 4420]: 18252 00:18:53.591 @path[10.0.0.2, 4420]: 18062 00:18:53.591 @path[10.0.0.2, 4420]: 18161 00:18:53.591 @path[10.0.0.2, 4420]: 18109 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81227 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:53.591 22:49:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:53.848 22:49:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:53.848 22:49:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:53.848 22:49:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81345 00:18:53.848 22:49:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.498 Attaching 4 probes... 
00:19:00.498 @path[10.0.0.2, 4421]: 12586 00:19:00.498 @path[10.0.0.2, 4421]: 17296 00:19:00.498 @path[10.0.0.2, 4421]: 16899 00:19:00.498 @path[10.0.0.2, 4421]: 17009 00:19:00.498 @path[10.0.0.2, 4421]: 16832 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81345 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:00.498 22:49:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:00.757 22:49:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:00.757 22:49:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81457 00:19:00.757 22:49:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:00.757 22:49:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.318 Attaching 4 probes... 
00:19:07.318 00:19:07.318 00:19:07.318 00:19:07.318 00:19:07.318 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81457 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:07.318 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:07.577 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:07.577 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.577 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81570 00:19:07.577 22:49:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:14.184 22:49:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:14.184 22:49:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.184 Attaching 4 probes... 
00:19:14.184 @path[10.0.0.2, 4421]: 17518 00:19:14.184 @path[10.0.0.2, 4421]: 17861 00:19:14.184 @path[10.0.0.2, 4421]: 16924 00:19:14.184 @path[10.0.0.2, 4421]: 16062 00:19:14.184 @path[10.0.0.2, 4421]: 16127 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81570 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:14.184 22:49:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:15.118 22:49:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:15.118 22:49:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81699 00:19:15.118 22:49:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:15.118 22:49:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:21.691 Attaching 4 probes... 
00:19:21.691 @path[10.0.0.2, 4420]: 14685 00:19:21.691 @path[10.0.0.2, 4420]: 14956 00:19:21.691 @path[10.0.0.2, 4420]: 15264 00:19:21.691 @path[10.0.0.2, 4420]: 17221 00:19:21.691 @path[10.0.0.2, 4420]: 17688 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81699 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:21.691 22:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:21.691 [2024-07-15 22:49:37.080135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:21.691 22:49:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:21.950 22:49:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:28.521 22:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:28.521 22:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81868 00:19:28.521 22:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:28.521 22:49:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81019 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:33.833 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:33.833 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:34.091 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:34.091 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:34.091 Attaching 4 probes... 
00:19:34.091 @path[10.0.0.2, 4421]: 16947 00:19:34.091 @path[10.0.0.2, 4421]: 16044 00:19:34.091 @path[10.0.0.2, 4421]: 15831 00:19:34.091 @path[10.0.0.2, 4421]: 15974 00:19:34.091 @path[10.0.0.2, 4421]: 16016 00:19:34.091 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:34.091 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:34.091 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81868 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81069 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81069 ']' 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81069 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81069 00:19:34.352 killing process with pid 81069 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81069' 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81069 00:19:34.352 22:49:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81069 00:19:34.352 Connection closed with partial response: 00:19:34.352 00:19:34.352 00:19:34.620 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81069 00:19:34.620 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:34.620 [2024-07-15 22:48:52.356163] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:19:34.620 [2024-07-15 22:48:52.356257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81069 ] 00:19:34.620 [2024-07-15 22:48:52.485617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.620 [2024-07-15 22:48:52.599160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.620 [2024-07-15 22:48:52.654524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:34.620 Running I/O for 90 seconds... 
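The cycle that repeats in the shell trace above is the pair of multipath.sh helpers traced as host/multipath.sh@58-59 (set_ANA_state) and host/multipath.sh@64-73 (confirm_io_on_port). The following is a minimal sketch of that flow, reconstructed only from the commands visible in this log; the $rootdir shorthand, the backgrounding of bpftrace.sh and the local variable plumbing are assumptions, so the real helper bodies in test/nvmf/host/multipath.sh may differ:

# Sketch of the helpers traced above as host/multipath.sh@58-59 and @64-73.
# Reconstructed from this log only; $rootdir, $bdevperf_pid and the exact
# wiring of bpftrace.sh are assumptions, not the upstream implementation.
rootdir=/home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
bdevperf_pid=81019            # pid passed to bpftrace.sh in this run

set_ANA_state() {             # multipath.sh@58-59
    # $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

confirm_io_on_port() {        # multipath.sh@64-73
    local expected_state=$1 expected_port=$2
    # Attach the nvmf_path.bt probes to bdevperf and collect 6 s of counters.
    $rootdir/scripts/bpftrace.sh $bdevperf_pid $rootdir/scripts/bpf/nvmf_path.bt \
        > $rootdir/test/nvmf/host/trace.txt &
    local dtrace_pid=$!
    sleep 6
    # Port of the listener currently reporting the expected ANA state (@67).
    local active_port
    active_port=$($rpc_py nvmf_subsystem_get_listeners $nqn |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
    # Port the probes actually saw I/O on (@68-69); trace.txt lines look like
    #   @path[10.0.0.2, 4421]: 17321
    local port
    port=$(cat $rootdir/test/nvmf/host/trace.txt |
        awk '$1=="@path[10.0.0.2," {print $2}' | cut -d ']' -f1 | sed -n 1p)
    # Both checks at @70-71 must succeed for the test to continue.
    [[ "$port" == "$expected_port" ]]
    [[ "$port" == "$active_port" ]]
    kill $dtrace_pid                              # @72
    rm -f $rootdir/test/nvmf/host/trace.txt       # @73
}

Each confirm_io_on_port invocation in the log is one run of this check: flip the ANA states, let bdevperf push I/O for six seconds, then verify from the nvmf_path.bt counters that the traffic landed on the listener whose ANA state matches the expectation, including the '' '' case above where both listeners are inaccessible and no @path samples are expected at all. The later steps at @100-101 and @107-108 apply the same check after removing the 4421 listener (I/O must fall back to 4420) and after re-adding it with an optimized ANA state (I/O must return to 4421).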
00:19:34.620 [2024-07-15 22:49:02.406821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.406903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.406966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.620 [2024-07-15 22:49:02.407729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.620 [2024-07-15 22:49:02.407765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.620 [2024-07-15 22:49:02.407800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:34.620 [2024-07-15 22:49:02.407821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.407844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.407867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.407882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.407903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.407919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.407942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.407957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.407979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.407993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 
22:49:02.408107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59248 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.408490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.408979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.408999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.409013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.409051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.621 [2024-07-15 22:49:02.409087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 
22:49:02.409224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.621 [2024-07-15 22:49:02.409390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:34.621 [2024-07-15 22:49:02.409412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.409701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.409975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.409990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.622 [2024-07-15 22:49:02.410756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.410791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.410828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.410864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.410900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.410936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:34.622 [2024-07-15 22:49:02.410957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.622 [2024-07-15 22:49:02.410977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.410999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:63 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.411298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.411312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.412864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:02.412897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.412928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.412945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.412967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.412982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:02.413379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:02.413395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
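Each pair of NOTICE lines in the dump above is one in-flight bdevperf command (the nvme_io_qpair_print_command line, READ or WRITE with its LBA) followed by its completion (the spdk_nvme_print_completion line), and every completion carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path related) with status code 0x2: the I/O was caught on a listener at the moment set_ANA_state flipped it to inaccessible, and the host multipath policy is expected to retry it on the other path. The 22:49:02 batch coincides with the @86 set_ANA_state non_optimized inaccessible call in the trace, and the 22:49:08 batch below with the @89 inaccessible optimized call. When triaging a try.txt of this size a summary is usually more useful than the individual entries; the snippet below is one way to produce it, relying only on the literal strings visible in this excerpt (the file path and per-line layout are assumed to match it):

# Summarize the bdevperf NOTICE dump in try.txt (layout assumed to match the
# excerpt above: one print_command / print_completion pair per affected I/O).
try=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

# Completions failed with the path-related ANA status (03/02).
grep -c 'spdk_nvme_print_completion.*ASYMMETRIC ACCESS INACCESSIBLE' "$try"

# Split the affected commands by opcode (READ vs WRITE).
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' "$try" | awk '{print $2}' | sort | uniq -c

# Group the failed completions by second, to line them up with the
# 22:49:02 / 22:49:08 set_ANA_state calls in the shell trace.
grep 'ASYMMETRIC ACCESS INACCESSIBLE' "$try" |
    grep -oE '^\[2024-07-15 [0-9]{2}:[0-9]{2}:[0-9]{2}' | sort | uniq -c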
00:19:34.623 [2024-07-15 22:49:08.953042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.623 [2024-07-15 22:49:08.953446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:34.623 [2024-07-15 22:49:08.953695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.623 [2024-07-15 22:49:08.953709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.953974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.953994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 
[2024-07-15 22:49:08.954286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.624 [2024-07-15 22:49:08.954360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:34.624 [2024-07-15 22:49:08.954574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.624 [2024-07-15 22:49:08.954592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8248 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.954975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.954990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.625 [2024-07-15 22:49:08.955243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 
22:49:08.955398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:34.625 [2024-07-15 22:49:08.955539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.625 [2024-07-15 22:49:08.955553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.955861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.955896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.955932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.955966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.955987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.956002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.956036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.956071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.956183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.626 [2024-07-15 22:49:08.956223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956589] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:34.626 [2024-07-15 22:49:08.956732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.626 [2024-07-15 22:49:08.956747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.956768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.956782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.956817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.956831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.956851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.956866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.956886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.956900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.956920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.956948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.956968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.956981] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.957030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.957286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.957978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.958004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.627 [2024-07-15 22:49:08.958680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.627 [2024-07-15 22:49:08.958723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:34.627 [2024-07-15 22:49:08.958753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.958768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:08.958798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.958812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:08.958842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.958856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:08.958886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.958901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:08.958930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.958945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:08.958974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.958989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:08.959019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:08.959041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:19:34.628 [2024-07-15 22:49:16.045470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.628 [2024-07-15 22:49:16.045709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.045968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.045982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:34.628 [2024-07-15 22:49:16.046006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.628 [2024-07-15 22:49:16.046021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
Between 2024-07-15 22:49:16.046 and 22:49:16.050 (elapsed 00:19:34.628 - 00:19:34.632) the same pair of NOTICE messages from nvme_qpair.c (243:nvme_io_qpair_print_command followed by 474:spdk_nvme_print_completion) repeats for every outstanding I/O on queue pair qid:1, nsid:1, len:8 blocks per command: WRITE commands at lba 31080 through 31520 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands at lba 30600 through 30976 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), with every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 0x0035 and p:0 m:0 dnr:0 throughout. A second burst at 22:49:29.537 - 22:49:29.539 (elapsed 00:19:34.632 - 00:19:34.633) shows the same status for WRITE commands at lba 74552 through 74608 and READ commands at lba 74168 through 74288, ending with the READ at lba 74288 (cid:50) also completing ASYMMETRIC ACCESS INACCESSIBLE. From 22:49:29.539 onward the queue pair is being torn down: WRITE commands at lba 74616 through 74688 are still printed, but their completions change to ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.
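For reference, the two status pairs that dominate this dump decode as follows: (03/02) is status code type 0x3 (path-related) with status code 0x02, asymmetric namespace access inaccessible, and (00/08) is status code type 0x0 (generic) with status code 0x08, command aborted because its submission queue was deleted. The sketch below is illustrative only and not part of this test run; it assumes the public definitions from SPDK's spdk/nvme.h and spdk/nvme_spec.h headers, and classify_io_status is a hypothetical helper name, not an SPDK API.

/*
 * Minimal sketch: classify the two completion statuses seen in this log
 * using the fields an SPDK I/O completion callback receives.
 */
#include "spdk/nvme.h"

static const char *
classify_io_status(const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return "success";
	}
	if (cpl->status.sct == SPDK_NVME_SCT_PATH &&
	    cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE) {
		/* The "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions above. */
		return "ANA inaccessible - retry the I/O on another path";
	}
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The "ABORTED - SQ DELETION (00/08)" completions below. */
		return "aborted - the submission queue was deleted";
	}
	return "other error";
}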
The abort pattern continues for the rest of the outstanding commands on qid:1: WRITE commands at lba 74696 through 74960 and READ commands at lba 74296 through 74416 are each printed by 243:nvme_io_qpair_print_command, and every completion printed by 474:spdk_nvme_print_completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (2024-07-15 22:49:29.539305 - 22:49:29.540845, elapsed 00:19:34.633 - 00:19:34.634). The last command in this stretch is printed as:
00:19:34.634 [2024-07-15 22:49:29.540860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74968 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.540873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.540888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.540908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.540924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.540937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.540952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.540965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.540980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.540993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.541021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.541049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.541077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.541105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.541133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 
22:49:29.541162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.634 [2024-07-15 22:49:29.541190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.634 [2024-07-15 22:49:29.541218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.634 [2024-07-15 22:49:29.541246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.634 [2024-07-15 22:49:29.541281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.634 [2024-07-15 22:49:29.541314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.634 [2024-07-15 22:49:29.541343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.634 [2024-07-15 22:49:29.541371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.634 [2024-07-15 22:49:29.541386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.635 [2024-07-15 22:49:29.541679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1237490 is same with the state(5) to be set 00:19:34.635 [2024-07-15 22:49:29.541711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.541722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.541732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74544 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.541747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.541774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.541785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.541799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:34.635 [2024-07-15 22:49:29.541819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.541829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.541841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75072 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.541855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.541878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.541889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75080 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.541902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.541926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.541937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75088 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.541950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.541965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.541976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.541987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75096 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542120] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75128 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75136 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75144 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75152 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75160 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:19:34.635 [2024-07-15 22:49:29.542427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75168 len:8 PRP1 0x0 PRP2 0x0 00:19:34.635 [2024-07-15 22:49:29.542451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.635 [2024-07-15 22:49:29.542470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.635 [2024-07-15 22:49:29.542481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.635 [2024-07-15 22:49:29.542491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75176 len:8 PRP1 0x0 PRP2 0x0 00:19:34.636 [2024-07-15 22:49:29.542505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.636 [2024-07-15 22:49:29.542530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.636 [2024-07-15 22:49:29.542540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75184 len:8 PRP1 0x0 PRP2 0x0 00:19:34.636 [2024-07-15 22:49:29.542554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542632] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1237490 was disconnected and freed. reset controller. 
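The burst of "ABORTED - SQ DELETION (00/08)" completions above is the expected host-side trace when a submission queue is torn down while I/O is still queued: every outstanding READ/WRITE on qpair 0x1237490 is completed with that status, the qpair is disconnected and freed, and bdev_nvme schedules a controller reset. In this multipath run the aborts are provoked by dropping the active listener under load. A minimal sketch of how that is driven from the test side, assuming the same rpc.py calls that appear elsewhere in this log (subsystem nqn.2016-06.io.spdk:cnode1, target address 10.0.0.2, ports 4420/4421):

    # Sketch only -- command forms copied from the rpc.py invocations in this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Drop the active path while bdevperf I/O is in flight; commands queued on
    # that path complete as ABORTED - SQ DELETION and the initiator resets.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    # Offer the second path so the reconnect attempts against port 4421 can land.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421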
00:19:34.636 [2024-07-15 22:49:29.542754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.636 [2024-07-15 22:49:29.542785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.636 [2024-07-15 22:49:29.542814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.636 [2024-07-15 22:49:29.542842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.636 [2024-07-15 22:49:29.542890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.636 [2024-07-15 22:49:29.542919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.636 [2024-07-15 22:49:29.542939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11afc80 is same with the state(5) to be set 00:19:34.636 [2024-07-15 22:49:29.544173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:34.636 [2024-07-15 22:49:29.544213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11afc80 (9): Bad file descriptor 00:19:34.636 [2024-07-15 22:49:29.544608] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:34.636 [2024-07-15 22:49:29.544639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11afc80 with addr=10.0.0.2, port=4421 00:19:34.636 [2024-07-15 22:49:29.544657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11afc80 is same with the state(5) to be set 00:19:34.636 [2024-07-15 22:49:29.544822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11afc80 (9): Bad file descriptor 00:19:34.636 [2024-07-15 22:49:29.544890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:34.636 [2024-07-15 22:49:29.544923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:34.636 [2024-07-15 22:49:29.544939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:34.636 [2024-07-15 22:49:29.544972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
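Here the reset path itself fails first: connect() to 10.0.0.2 port 4421 returns errno 111 (ECONNREFUSED, nothing listening there yet), so controller reinitialization fails and the controller is marked as being in a failed state. bdev_nvme keeps retrying on its reconnect timer, which is why the next line reports "Resetting controller successful" roughly ten seconds later once the listener is reachable. The knobs that bound this retry loop are set when the controller is attached; the values below are the ones used by the attach command further down in this log (the multipath run's own attach line is outside this excerpt, so treat them as illustrative):

    # Sketch: reconnect tuning as passed to bdev_nvme_attach_controller below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # --reconnect-delay-sec: seconds to wait between reconnect attempts.
    # --ctrlr-loss-timeout-sec: how long to keep retrying before the controller
    #   is given up on.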
00:19:34.636 [2024-07-15 22:49:29.544988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:34.636 [2024-07-15 22:49:39.613208] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:34.636 Received shutdown signal, test time was about 55.474881 seconds 00:19:34.636 00:19:34.636 Latency(us) 00:19:34.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.636 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:34.636 Verification LBA range: start 0x0 length 0x4000 00:19:34.636 Nvme0n1 : 55.47 7292.15 28.48 0.00 0.00 17520.42 184.32 7046430.72 00:19:34.636 =================================================================================================================== 00:19:34.636 Total : 7292.15 28.48 0.00 0.00 17520.42 184.32 7046430.72 00:19:34.636 22:49:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:34.895 rmmod nvme_tcp 00:19:34.895 rmmod nvme_fabrics 00:19:34.895 rmmod nvme_keyring 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81019 ']' 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81019 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81019 ']' 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81019 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81019 00:19:34.895 killing process with pid 81019 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81019' 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # 
kill 81019 00:19:34.895 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81019 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:35.154 00:19:35.154 real 1m1.529s 00:19:35.154 user 2m50.394s 00:19:35.154 sys 0m18.356s 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.154 ************************************ 00:19:35.154 END TEST nvmf_host_multipath 00:19:35.154 22:49:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:35.154 ************************************ 00:19:35.413 22:49:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.413 22:49:50 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:35.413 22:49:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.413 22:49:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.413 22:49:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.413 ************************************ 00:19:35.413 START TEST nvmf_timeout 00:19:35.413 ************************************ 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:35.413 * Looking for test storage... 
00:19:35.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.413 22:49:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.414 
22:49:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.414 22:49:50 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:35.414 Cannot find device "nvmf_tgt_br" 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.414 Cannot find device "nvmf_tgt_br2" 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:35.414 Cannot find device "nvmf_tgt_br" 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:35.414 Cannot find device "nvmf_tgt_br2" 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:35.414 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:35.673 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:35.673 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.673 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:35.673 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.673 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.673 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:35.673 22:49:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:35.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:19:35.673 00:19:35.673 --- 10.0.0.2 ping statistics --- 00:19:35.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.673 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:35.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:35.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:35.673 00:19:35.673 --- 10.0.0.3 ping statistics --- 00:19:35.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.673 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:35.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:35.673 00:19:35.673 --- 10.0.0.1 ping statistics --- 00:19:35.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.673 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82179 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82179 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82179 ']' 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.673 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.674 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.674 22:49:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:35.933 [2024-07-15 22:49:51.284586] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:19:35.933 [2024-07-15 22:49:51.284699] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.933 [2024-07-15 22:49:51.421915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:36.192 [2024-07-15 22:49:51.570193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.192 [2024-07-15 22:49:51.570258] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.192 [2024-07-15 22:49:51.570269] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.192 [2024-07-15 22:49:51.570278] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.192 [2024-07-15 22:49:51.570286] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.192 [2024-07-15 22:49:51.570674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.192 [2024-07-15 22:49:51.570970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.192 [2024-07-15 22:49:51.641783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.759 22:49:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.017 [2024-07-15 22:49:52.553339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.017 22:49:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:37.293 Malloc0 00:19:37.553 22:49:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:37.553 22:49:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:37.812 22:49:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.072 [2024-07-15 22:49:53.569258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82227 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82227 /var/tmp/bdevperf.sock 00:19:38.072 22:49:53 
nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82227 ']' 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.072 22:49:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:38.330 [2024-07-15 22:49:53.647748] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:19:38.330 [2024-07-15 22:49:53.647850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82227 ] 00:19:38.330 [2024-07-15 22:49:53.789476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.589 [2024-07-15 22:49:53.934673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.589 [2024-07-15 22:49:53.995775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:39.155 22:49:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.155 22:49:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:39.155 22:49:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:39.412 22:49:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:39.670 NVMe0n1 00:19:39.670 22:49:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82252 00:19:39.670 22:49:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.670 22:49:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:40.007 Running I/O for 10 seconds... 
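At this point the timeout test's bring-up is complete: a TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a bdevperf initiator attached to it with a 2 s reconnect delay and 5 s controller-loss timeout, now running verify I/O at queue depth 128 for 10 seconds. A consolidated sketch of the same sequence, with the commands copied from the trace above (the waitforlisten/trap plumbing from common.sh is omitted):

    # Target side (nvmf_tgt, pid 82179, inside the nvmf_tgt_ns_spdk namespace).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420

    # Initiator side (bdevperf, pid 82227, RPC socket /var/tmp/bdevperf.sock).
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The listener removal on the next line is what turns this steady-state workload into the abort/reconnect sequence that follows.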
00:19:40.598 22:49:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.858 [2024-07-15 22:49:56.407278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.858 [2024-07-15 22:49:56.407515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.858 [2024-07-15 22:49:56.407547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.858 [2024-07-15 22:49:56.407571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.858 
[2024-07-15 22:49:56.407582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.407981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.407993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.859 [2024-07-15 22:49:56.408276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.859 [2024-07-15 22:49:56.408431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.859 [2024-07-15 22:49:56.408441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 
[2024-07-15 22:49:56.408498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.408802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.408979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.408992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70056 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.860 [2024-07-15 22:49:56.409347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.409370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.860 [2024-07-15 22:49:56.409390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.860 [2024-07-15 22:49:56.409405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:40.860 [2024-07-15 22:49:56.409414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.861 [2024-07-15 22:49:56.409707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409855] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.409988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.409997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.861 [2024-07-15 22:49:56.410017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d7100 is same with the state(5) to be set 00:19:40.861 [2024-07-15 22:49:56.410048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70240 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70568 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70576 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70584 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70592 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70600 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70608 len:8 PRP1 0x0 PRP2 0x0 00:19:40.861 [2024-07-15 22:49:56.410301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.861 [2024-07-15 22:49:56.410310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.861 [2024-07-15 22:49:56.410317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.861 [2024-07-15 22:49:56.410324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70616 len:8 PRP1 0x0 PRP2 0x0 00:19:40.862 [2024-07-15 22:49:56.410332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.862 [2024-07-15 22:49:56.410341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.862 [2024-07-15 22:49:56.410348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.862 [2024-07-15 22:49:56.410356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70624 len:8 PRP1 0x0 PRP2 0x0 00:19:40.862 [2024-07-15 22:49:56.410364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.862 [2024-07-15 22:49:56.410426] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7d7100 was disconnected and freed. reset controller. 00:19:40.862 [2024-07-15 22:49:56.410550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.862 [2024-07-15 22:49:56.410581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.862 [2024-07-15 22:49:56.410594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.862 [2024-07-15 22:49:56.410603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.862 [2024-07-15 22:49:56.410613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.862 [2024-07-15 22:49:56.410622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.862 [2024-07-15 22:49:56.410632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.862 [2024-07-15 22:49:56.410641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.862 [2024-07-15 22:49:56.410656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7868b0 is same with the state(5) to be set 00:19:40.862 [2024-07-15 22:49:56.410873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.862 [2024-07-15 22:49:56.410896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7868b0 (9): Bad file descriptor 00:19:40.862 [2024-07-15 22:49:56.411002] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.862 [2024-07-15 22:49:56.411024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7868b0 with addr=10.0.0.2, port=4420 00:19:40.862 [2024-07-15 
22:49:56.411036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7868b0 is same with the state(5) to be set 00:19:40.862 [2024-07-15 22:49:56.411054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7868b0 (9): Bad file descriptor 00:19:40.862 [2024-07-15 22:49:56.411070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.862 [2024-07-15 22:49:56.411079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.862 [2024-07-15 22:49:56.411089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:40.862 [2024-07-15 22:49:56.411109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.862 [2024-07-15 22:49:56.411119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.125 22:49:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:43.041 [2024-07-15 22:49:58.411478] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.041 [2024-07-15 22:49:58.411592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7868b0 with addr=10.0.0.2, port=4420 00:19:43.041 [2024-07-15 22:49:58.411612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7868b0 is same with the state(5) to be set 00:19:43.041 [2024-07-15 22:49:58.411642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7868b0 (9): Bad file descriptor 00:19:43.041 [2024-07-15 22:49:58.411662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.041 [2024-07-15 22:49:58.411672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:43.042 [2024-07-15 22:49:58.411683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:43.042 [2024-07-15 22:49:58.411711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
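Note on the repeated "connect() failed, errno = 111" records above: errno 111 is ECONNREFUSED. The subsystem listener on 10.0.0.2:4420 was removed by the rpc.py call at host/timeout.sh@55, so each reconnect attempt made by the bdev_nvme layer is refused until the listener is re-added later in the test. A minimal sketch of the same failure mode, assuming it is run from the initiator VM while the listener is still down (address and port are taken from the log, everything else is illustrative):

    import errno
    import socket

    # Try the NVMe/TCP port whose listener the test removed (values from the log above).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect(("10.0.0.2", 4420))
    except OSError as exc:
        # Expected while the listener is down: errno 111 == ECONNREFUSED.
        print(exc.errno, exc.errno == errno.ECONNREFUSED)
    finally:
        sock.close()
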
00:19:43.042 [2024-07-15 22:49:58.411723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.042 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:43.042 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:43.042 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:43.300 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:43.300 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:43.300 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:43.300 22:49:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:43.557 22:49:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:43.557 22:49:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:44.932 [2024-07-15 22:50:00.411919] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.932 [2024-07-15 22:50:00.412002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7868b0 with addr=10.0.0.2, port=4420 00:19:44.932 [2024-07-15 22:50:00.412028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7868b0 is same with the state(5) to be set 00:19:44.932 [2024-07-15 22:50:00.412057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7868b0 (9): Bad file descriptor 00:19:44.932 [2024-07-15 22:50:00.412089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.932 [2024-07-15 22:50:00.412101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:44.932 [2024-07-15 22:50:00.412111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:44.932 [2024-07-15 22:50:00.412139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.932 [2024-07-15 22:50:00.412151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.462 [2024-07-15 22:50:02.412254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:47.462 [2024-07-15 22:50:02.412349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:47.462 [2024-07-15 22:50:02.412363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:47.462 [2024-07-15 22:50:02.412374] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:47.462 [2024-07-15 22:50:02.412404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
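The trace above checks that, even while reconnects keep failing, the controller and its namespace are still registered: get_controller and get_bdev still return NVMe0 and NVMe0n1. (Further down, the same checks return empty strings.) A rough Python equivalent of that check, reusing the rpc.py path and bdevperf socket that appear in the log; the small helper function is illustrative only:

    import json
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path from the log
    SOCK = "/var/tmp/bdevperf.sock"                        # bdevperf RPC socket from the log

    def names(method):
        # Same data that jq -r '.[].name' extracts in the shell trace.
        out = subprocess.check_output([RPC, "-s", SOCK, method])
        return [entry["name"] for entry in json.loads(out)]

    # Expected values at this point in the trace:
    print(names("bdev_nvme_get_controllers"))  # ['NVMe0']
    print(names("bdev_get_bdevs"))             # ['NVMe0n1']
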
00:19:48.029 
00:19:48.029 Latency(us) 
00:19:48.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:48.029 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:19:48.029 Verification LBA range: start 0x0 length 0x4000 
00:19:48.029 NVMe0n1 : 8.14 1068.93 4.18 15.72 0.00 117822.39 3351.27 7015926.69 
00:19:48.029 =================================================================================================================== 
00:19:48.029 Total : 1068.93 4.18 15.72 0.00 117822.39 3351.27 7015926.69 
00:19:48.029 0 
00:19:48.596 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 
00:19:48.596 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:19:48.596 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 
00:19:48.854 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 
00:19:48.854 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 
00:19:48.854 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 
00:19:48.854 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82252 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82227 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82227 ']' 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82227 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82227 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 
00:19:49.212 killing process with pid 82227 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82227' 
00:19:49.212 Received shutdown signal, test time was about 9.350923 seconds 
00:19:49.212 
00:19:49.212 Latency(us) 
00:19:49.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:49.212 =================================================================================================================== 
00:19:49.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82227 
00:19:49.212 22:50:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82227 
00:19:49.470 22:50:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:19:49.730 [2024-07-15 22:50:05.077431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82368 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82368 /var/tmp/bdevperf.sock 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82368 ']' 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 
00:19:49.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 
00:19:49.730 22:50:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 
00:19:49.730 [2024-07-15 22:50:05.147292] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:19:49.730 [2024-07-15 22:50:05.147397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82368 ] 
00:19:49.730 [2024-07-15 22:50:05.277430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:19:49.988 [2024-07-15 22:50:05.398189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 
00:19:49.988 [2024-07-15 22:50:05.454605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 
00:19:50.555 22:50:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:19:50.556 22:50:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 
00:19:50.556 22:50:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 
00:19:51.124 22:50:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 
00:19:51.124 NVMe0n1 
00:19:51.382 22:50:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82392 
00:19:51.382 22:50:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:19:51.382 22:50:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 
00:19:51.382 Running I/O for 10 seconds... 
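The attach at host/timeout.sh@79 above is where the timeout behaviour under test is configured. Roughly: --reconnect-delay-sec 1 retries the connection every second, --fast-io-fail-timeout-sec 2 starts failing I/O after about two seconds without a connection, and --ctrlr-loss-timeout-sec 5 gives up on the controller after about five. A sketch of issuing the same call from Python, reusing the exact flags and paths from the log; only the subprocess wrapper is added here:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path from the log
    SOCK = "/var/tmp/bdevperf.sock"                        # bdevperf RPC socket from the log

    # Mirrors the rpc.py invocation at host/timeout.sh@79.
    subprocess.check_call([
        RPC, "-s", SOCK, "bdev_nvme_attach_controller",
        "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
        "--ctrlr-loss-timeout-sec", "5",
        "--fast-io-fail-timeout-sec", "2",
        "--reconnect-delay-sec", "1",
    ])
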
00:19:52.314 22:50:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.574 [2024-07-15 22:50:07.955190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93800 is same with the state(5) to be set 00:19:52.574 [2024-07-15 22:50:07.955475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.574 [2024-07-15 22:50:07.955514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.574 [2024-07-15 22:50:07.955537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.574 [2024-07-15 22:50:07.955549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.574 [2024-07-15 22:50:07.955584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.574 [2024-07-15 22:50:07.955597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.574 [2024-07-15 22:50:07.955610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.574 [2024-07-15 22:50:07.955619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.574 [2024-07-15 22:50:07.955631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.574 [2024-07-15 22:50:07.955641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.574 [2024-07-15 22:50:07.955652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.574 [2024-07-15 22:50:07.955662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.574 [2024-07-15 22:50:07.955673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.574 [2024-07-15 22:50:07.955683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.955704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.955725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 
22:50:07.955737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.955746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.955987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.955998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.575 [2024-07-15 22:50:07.956305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.575 [2024-07-15 22:50:07.956653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.575 [2024-07-15 22:50:07.956665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:52.576 [2024-07-15 22:50:07.956675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.956842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.956982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.956992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.576 [2024-07-15 22:50:07.957363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 
22:50:07.957571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.576 [2024-07-15 22:50:07.957604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.576 [2024-07-15 22:50:07.957615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.577 [2024-07-15 22:50:07.957823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.577 [2024-07-15 22:50:07.957979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.957989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1523100 is same with the state(5) to be set 00:19:52.577 [2024-07-15 22:50:07.958006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 
22:50:07.958024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67632 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68056 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68064 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68072 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68080 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68088 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68096 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68104 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68112 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68120 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68128 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68136 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:68144 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68152 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.577 [2024-07-15 22:50:07.958539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.577 [2024-07-15 22:50:07.958547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68160 len:8 PRP1 0x0 PRP2 0x0 00:19:52.577 [2024-07-15 22:50:07.958556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.577 [2024-07-15 22:50:07.958576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.578 [2024-07-15 22:50:07.958584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.578 [2024-07-15 22:50:07.958594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68168 len:8 PRP1 0x0 PRP2 0x0 00:19:52.578 [2024-07-15 22:50:07.958603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.578 [2024-07-15 22:50:07.958618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.578 [2024-07-15 22:50:07.958626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.578 [2024-07-15 22:50:07.958634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68176 len:8 PRP1 0x0 PRP2 0x0 00:19:52.578 [2024-07-15 22:50:07.958643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.578 [2024-07-15 22:50:07.958653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.578 [2024-07-15 22:50:07.958660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.578 [2024-07-15 22:50:07.958669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68184 len:8 PRP1 0x0 PRP2 0x0 00:19:52.578 [2024-07-15 22:50:07.958678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.578 [2024-07-15 22:50:07.958688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.578 [2024-07-15 22:50:07.958696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.578 [2024-07-15 22:50:07.958704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68192 len:8 PRP1 0x0 PRP2 0x0 
00:19:52.578 [2024-07-15 22:50:07.958713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.578 [2024-07-15 22:50:07.958769] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1523100 was disconnected and freed. reset controller. 00:19:52.578 [2024-07-15 22:50:07.959029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.578 [2024-07-15 22:50:07.959131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:19:52.578 [2024-07-15 22:50:07.959240] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.578 [2024-07-15 22:50:07.959262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d28b0 with addr=10.0.0.2, port=4420 00:19:52.578 [2024-07-15 22:50:07.959273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:19:52.578 [2024-07-15 22:50:07.959292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:19:52.578 [2024-07-15 22:50:07.959309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.578 [2024-07-15 22:50:07.959319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.578 [2024-07-15 22:50:07.959330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.578 [2024-07-15 22:50:07.959350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.578 [2024-07-15 22:50:07.959362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.578 22:50:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:53.513 [2024-07-15 22:50:08.959548] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.513 [2024-07-15 22:50:08.959667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d28b0 with addr=10.0.0.2, port=4420 00:19:53.513 [2024-07-15 22:50:08.959685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:19:53.513 [2024-07-15 22:50:08.959713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:19:53.513 [2024-07-15 22:50:08.959733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.513 [2024-07-15 22:50:08.959744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.513 [2024-07-15 22:50:08.959755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.513 [2024-07-15 22:50:08.959782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
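At this point the expected failure path has played out end to end: removing the 10.0.0.2:4420 listener caused every queued I/O to complete as ABORTED - SQ DELETION, the disconnected qpair was freed, and each 1-second reconnect attempt is refused with errno 111 while the listener is down, so the controller reset keeps failing and retrying. The test then restores the listener and waits for the backgrounded perform_tests helper, which is what the nvmf_subsystem_add_listener call and the eventual "Resetting controller successful" notice below show. A sketch of that outage/recovery half, again using only the RPC calls that appear in this trace (subsystem NQN, address, port and the 1-second outage window are taken from this run):

#!/usr/bin/env bash
# Sketch of the fault-injection and recovery steps of the timeout test, following the
# rpc.py calls in this trace; NQN, address and port are specific to this run.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Simulate a target-side outage: drop the TCP listener so reconnects fail with errno 111.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Keep the outage well inside the 5 s ctrlr-loss budget so the controller can still recover.
sleep 1

# Restore the listener; the next 1 s reconnect attempt should succeed and the reset completes.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Block until the perform_tests helper started during setup (82392 in this run) reports
# the final IOPS/latency summary.
wait "${rpc_pid:?rpc_pid is set by the setup sketch above}"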
00:19:53.513 [2024-07-15 22:50:08.959795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.513 22:50:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.772 [2024-07-15 22:50:09.249705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.772 22:50:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82392 00:19:54.708 [2024-07-15 22:50:09.972912] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:01.266 00:20:01.266 Latency(us) 00:20:01.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.266 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.266 Verification LBA range: start 0x0 length 0x4000 00:20:01.266 NVMe0n1 : 10.01 6383.05 24.93 0.00 0.00 20013.05 1362.85 3019898.88 00:20:01.266 =================================================================================================================== 00:20:01.266 Total : 6383.05 24.93 0.00 0.00 20013.05 1362.85 3019898.88 00:20:01.266 0 00:20:01.266 22:50:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82502 00:20:01.266 22:50:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.266 22:50:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:01.523 Running I/O for 10 seconds... 00:20:02.453 22:50:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.714 [2024-07-15 22:50:18.071464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with t[2024-07-15 22:50:18.071473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.714 he state(5) to be set 00:20:02.714 [2024-07-15 22:50:18.071534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.714 [2024-07-15 22:50:18.071536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.714 [2024-07-15 22:50:18.071562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.714 [2024-07-15 22:50:18.071567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.714 [2024-07-15 22:50:18.071578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.714 [2024-07-15 22:50:18.071573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.715 [2024-07-15 22:50:18.071594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 
22:50:18.071599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.715 [2024-07-15 22:50:18.071604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.715 [2024-07-15 22:50:18.071613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.715 [2024-07-15 22:50:18.071639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071928] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.071993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 
00:20:02.715 [2024-07-15 22:50:18.072109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.715 [2024-07-15 22:50:18.072300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is 
same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc947b0 is same with the state(5) to be set 00:20:02.716 [2024-07-15 22:50:18.072532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.072979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.072988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:02.716 [2024-07-15 22:50:18.073032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.716 [2024-07-15 22:50:18.073284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.716 [2024-07-15 22:50:18.073294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.073986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.073997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 
[2024-07-15 22:50:18.074144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.717 [2024-07-15 22:50:18.074228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.717 [2024-07-15 22:50:18.074237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.074986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.074996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.718 [2024-07-15 22:50:18.075017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:02.718 [2024-07-15 22:50:18.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.718 [2024-07-15 22:50:18.075064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.718 [2024-07-15 22:50:18.075085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.718 [2024-07-15 22:50:18.075106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.718 [2024-07-15 22:50:18.075126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.718 [2024-07-15 22:50:18.075147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.718 [2024-07-15 22:50:18.075159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.718 [2024-07-15 22:50:18.075168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.719 [2024-07-15 22:50:18.075345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.719 [2024-07-15 22:50:18.075367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15530e0 is same with the state(5) to be set 00:20:02.719 [2024-07-15 22:50:18.075388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.719 [2024-07-15 22:50:18.075396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.719 [2024-07-15 22:50:18.075405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:20:02.719 [2024-07-15 22:50:18.075414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.719 [2024-07-15 22:50:18.075468] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15530e0 was disconnected and freed. reset controller. 
00:20:02.719 [2024-07-15 22:50:18.075716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.719 [2024-07-15 22:50:18.075741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:20:02.719 [2024-07-15 22:50:18.075841] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:02.719 [2024-07-15 22:50:18.075864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d28b0 with addr=10.0.0.2, port=4420 00:20:02.719 [2024-07-15 22:50:18.075876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:20:02.719 [2024-07-15 22:50:18.075895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:20:02.719 [2024-07-15 22:50:18.075911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:02.719 [2024-07-15 22:50:18.075920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:02.719 [2024-07-15 22:50:18.075931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:02.719 [2024-07-15 22:50:18.075950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:02.719 [2024-07-15 22:50:18.075961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.719 22:50:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:03.650 [2024-07-15 22:50:19.076104] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.650 [2024-07-15 22:50:19.076197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d28b0 with addr=10.0.0.2, port=4420 00:20:03.650 [2024-07-15 22:50:19.076216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:20:03.650 [2024-07-15 22:50:19.076242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:20:03.651 [2024-07-15 22:50:19.076281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.651 [2024-07-15 22:50:19.076292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:03.651 [2024-07-15 22:50:19.076303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:03.651 [2024-07-15 22:50:19.076330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:03.651 [2024-07-15 22:50:19.076343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.588 [2024-07-15 22:50:20.076511] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:04.588 [2024-07-15 22:50:20.076594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d28b0 with addr=10.0.0.2, port=4420 00:20:04.588 [2024-07-15 22:50:20.076614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:20:04.588 [2024-07-15 22:50:20.076642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:20:04.588 [2024-07-15 22:50:20.076662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.588 [2024-07-15 22:50:20.076672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.588 [2024-07-15 22:50:20.076684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.588 [2024-07-15 22:50:20.076712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:04.588 [2024-07-15 22:50:20.076724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.533 [2024-07-15 22:50:21.080475] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.533 [2024-07-15 22:50:21.080551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d28b0 with addr=10.0.0.2, port=4420 00:20:05.533 [2024-07-15 22:50:21.080581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d28b0 is same with the state(5) to be set 00:20:05.533 [2024-07-15 22:50:21.080833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d28b0 (9): Bad file descriptor 00:20:05.533 [2024-07-15 22:50:21.081078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.533 [2024-07-15 22:50:21.081100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:05.533 [2024-07-15 22:50:21.081113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.533 [2024-07-15 22:50:21.084952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:05.533 [2024-07-15 22:50:21.084988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.533 22:50:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.793 [2024-07-15 22:50:21.304181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.793 22:50:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82502 00:20:06.728 [2024-07-15 22:50:22.125800] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
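The connect() failures above are errno 111 (ECONNREFUSED): the target's TCP listener is not accepting connections at this point (it is re-added just below), so every reconnect attempt during the 3-second sleep is refused. As soon as timeout.sh re-adds the listener, the queued reset succeeds ("Resetting controller successful") and the script simply waits for the backgrounded verify pass to finish (pid 82502 in this run). A minimal sketch of that restore-and-wait step, using the exact RPC shown in the trace (the $bdevperf_wait_pid variable name is an assumption, not taken from the script):

  # Re-create the TCP listener that was removed earlier so the initiator can reconnect.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Block until the backgrounded bdevperf verify pass finishes.
  wait "$bdevperf_wait_pid"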
00:20:11.990 00:20:11.990 Latency(us) 00:20:11.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.990 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:11.990 Verification LBA range: start 0x0 length 0x4000 00:20:11.990 NVMe0n1 : 10.01 5494.46 21.46 3667.63 0.00 13933.87 700.04 3019898.88 00:20:11.990 =================================================================================================================== 00:20:11.990 Total : 5494.46 21.46 3667.63 0.00 13933.87 0.00 3019898.88 00:20:11.990 0 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82368 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82368 ']' 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82368 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82368 00:20:11.990 killing process with pid 82368 00:20:11.990 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.990 00:20:11.990 Latency(us) 00:20:11.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.990 =================================================================================================================== 00:20:11.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82368' 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82368 00:20:11.990 22:50:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82368 00:20:11.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82611 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82611 /var/tmp/bdevperf.sock 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82611 ']' 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.990 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:11.990 [2024-07-15 22:50:27.273252] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:20:11.990 [2024-07-15 22:50:27.273349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82611 ] 00:20:11.990 [2024-07-15 22:50:27.404918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.991 [2024-07-15 22:50:27.515557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.248 [2024-07-15 22:50:27.570314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:12.248 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.248 22:50:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:12.248 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82611 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:12.248 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82625 00:20:12.248 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:12.506 22:50:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:12.764 NVMe0n1 00:20:12.765 22:50:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.765 22:50:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82661 00:20:12.765 22:50:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:12.765 Running I/O for 10 seconds... 
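This block is the setup for the second half of the test: a fresh bdevperf is started idle (-z) on its own RPC socket, the nvmf_timeout.bt bpftrace script is attached to it, and NVMe0 is attached with a 2-second reconnect delay and a 5-second controller-loss timeout before perform_tests launches the 10-second, 128-deep randread job. A condensed sketch of that sequence, built only from the commands visible in the trace (the backgrounding and $! bookkeeping is an assumption about how timeout.sh wires the pids together):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) on a private RPC socket: core mask 0x4, QD 128, 4 KiB random reads for 10 s.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!

  # Attach the bpftrace probes that record reset/reconnect events for this pid.
  "$spdk/scripts/bpftrace.sh" "$bdevperf_pid" "$spdk/scripts/bpf/nvmf_timeout.bt" &
  dtrace_pid=$!

  # Same bdev_nvme options as the trace, then attach the controller with the timeout knobs under test.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1 -e 9
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued I/O job asynchronously.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
  rpc_pid=$!

The probe output is what later shows up as trace.txt; exactly how it gets redirected there is not visible in this excerpt.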
00:20:13.697 22:50:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.957 [2024-07-15 22:50:29.432421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.957 [2024-07-15 22:50:29.432507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.957 [2024-07-15 22:50:29.432531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.957 [2024-07-15 22:50:29.432554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.957 [2024-07-15 22:50:29.432590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.957 [2024-07-15 22:50:29.432611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.957 [2024-07-15 22:50:29.432633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.957 [2024-07-15 22:50:29.432643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.958 [2024-07-15 22:50:29.432706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 
22:50:29.432937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.432979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.432990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.958 [2024-07-15 22:50:29.433549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.958 [2024-07-15 22:50:29.433568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:13.959 [2024-07-15 22:50:29.433817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.433987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.433997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.959 [2024-07-15 22:50:29.434542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.959 [2024-07-15 22:50:29.434554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.434980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.434990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.960 [2024-07-15 22:50:29.435032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.960 [2024-07-15 22:50:29.435383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab21d0 is same with the state(5) to be set 00:20:13.960 [2024-07-15 22:50:29.435406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:13.960 [2024-07-15 22:50:29.435413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:13.960 [2024-07-15 22:50:29.435422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102584 len:8 PRP1 0x0 PRP2 0x0 00:20:13.960 [2024-07-15 22:50:29.435436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.960 [2024-07-15 22:50:29.435490] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab21d0 was disconnected and freed. reset controller. 
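Once the listener is removed and the connection drops, every command still outstanding on the I/O qpair is completed with ABORTED - SQ DELETION, which is why the dump above walks cid 126 down to cid 0 (the run uses a 128-deep queue) before the qpair is freed and the reset path takes over. To sanity-check that against a saved copy of this output, a throwaway count like the following works (bdevperf.log is a hypothetical file name, not something the test writes):

  # Roughly one aborted completion per I/O outstanding at the moment of disconnect.
  grep -c 'ABORTED - SQ DELETION' bdevperf.log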
00:20:13.960 [2024-07-15 22:50:29.435795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.960 [2024-07-15 22:50:29.435899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a61770 (9): Bad file descriptor 00:20:13.960 [2024-07-15 22:50:29.436057] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.960 [2024-07-15 22:50:29.436082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a61770 with addr=10.0.0.2, port=4420 00:20:13.960 [2024-07-15 22:50:29.436094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61770 is same with the state(5) to be set 00:20:13.960 [2024-07-15 22:50:29.436118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a61770 (9): Bad file descriptor 00:20:13.960 [2024-07-15 22:50:29.436143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.960 [2024-07-15 22:50:29.436154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.961 [2024-07-15 22:50:29.436170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.961 [2024-07-15 22:50:29.436198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.961 [2024-07-15 22:50:29.436211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:13.961 22:50:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82661 00:20:16.488 [2024-07-15 22:50:31.436395] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.488 [2024-07-15 22:50:31.436458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a61770 with addr=10.0.0.2, port=4420 00:20:16.488 [2024-07-15 22:50:31.436475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61770 is same with the state(5) to be set 00:20:16.488 [2024-07-15 22:50:31.436501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a61770 (9): Bad file descriptor 00:20:16.488 [2024-07-15 22:50:31.436521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.488 [2024-07-15 22:50:31.436531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:16.488 [2024-07-15 22:50:31.436542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:16.488 [2024-07-15 22:50:31.436587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
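The refused reconnect attempts come roughly two seconds apart (22:50:29 and 22:50:31 above, 22:50:33 in the block that follows), which is --reconnect-delay-sec 2 doing its job, and --ctrlr-loss-timeout-sec 5 is why the controller is declared failed for good at about 22:50:35, roughly five seconds after the link went down. An illustrative way to confirm that spacing from a saved log (not part of timeout.sh; bdevperf.log is a hypothetical file name):

  # Print the gap, in seconds, between successive refused reconnect attempts.
  grep 'uring_sock_create: \*ERROR\*: connect() failed' bdevperf.log \
    | sed -E 's/.*\[[0-9-]+ ([0-9:.]+)\].*/\1/' \
    | awk -F: '{ t = $1*3600 + $2*60 + $3; if (NR > 1) printf "%.1f\n", t - prev; prev = t }'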
00:20:16.488 [2024-07-15 22:50:31.436602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:18.389 [2024-07-15 22:50:33.436852] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.389 [2024-07-15 22:50:33.436927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a61770 with addr=10.0.0.2, port=4420 00:20:18.389 [2024-07-15 22:50:33.436945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61770 is same with the state(5) to be set 00:20:18.389 [2024-07-15 22:50:33.436973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a61770 (9): Bad file descriptor 00:20:18.389 [2024-07-15 22:50:33.436992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.389 [2024-07-15 22:50:33.437003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:18.389 [2024-07-15 22:50:33.437015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:18.389 [2024-07-15 22:50:33.437042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.389 [2024-07-15 22:50:33.437053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.288 [2024-07-15 22:50:35.437227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.288 [2024-07-15 22:50:35.437310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.288 [2024-07-15 22:50:35.437324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.288 [2024-07-15 22:50:35.437334] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:20.288 [2024-07-15 22:50:35.437361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.221 00:20:21.221 Latency(us) 00:20:21.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.221 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:21.221 NVMe0n1 : 8.13 2014.45 7.87 15.74 0.00 62993.86 8281.37 7015926.69 00:20:21.221 =================================================================================================================== 00:20:21.221 Total : 2014.45 7.87 15.74 0.00 62993.86 8281.37 7015926.69 00:20:21.221 0 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:21.221 Attaching 5 probes... 
00:20:21.221 1323.813772: reset bdev controller NVMe0 00:20:21.221 1323.997192: reconnect bdev controller NVMe0 00:20:21.221 3324.281686: reconnect delay bdev controller NVMe0 00:20:21.221 3324.325373: reconnect bdev controller NVMe0 00:20:21.221 5324.735466: reconnect delay bdev controller NVMe0 00:20:21.221 5324.759759: reconnect bdev controller NVMe0 00:20:21.221 7325.227165: reconnect delay bdev controller NVMe0 00:20:21.221 7325.253534: reconnect bdev controller NVMe0 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82625 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82611 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82611 ']' 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82611 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82611 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:21.221 killing process with pid 82611 00:20:21.221 Received shutdown signal, test time was about 8.196527 seconds 00:20:21.221 00:20:21.221 Latency(us) 00:20:21.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.221 =================================================================================================================== 00:20:21.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82611' 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82611 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82611 00:20:21.221 22:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.477 22:50:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:21.477 22:50:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:21.477 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.477 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.734 rmmod nvme_tcp 00:20:21.734 rmmod nvme_fabrics 00:20:21.734 rmmod nvme_keyring 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82179 ']' 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82179 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82179 ']' 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82179 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82179 00:20:21.734 killing process with pid 82179 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82179' 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82179 00:20:21.734 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82179 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:21.992 00:20:21.992 real 0m46.702s 00:20:21.992 user 2m16.815s 00:20:21.992 sys 0m5.774s 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.992 22:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:21.992 ************************************ 00:20:21.992 END TEST nvmf_timeout 00:20:21.992 ************************************ 00:20:21.992 22:50:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:21.992 22:50:37 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:21.992 22:50:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:21.992 22:50:37 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.992 22:50:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:21.992 22:50:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:22.249 00:20:22.249 real 12m23.145s 00:20:22.249 user 30m11.244s 00:20:22.249 sys 3m4.902s 00:20:22.249 22:50:37 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.249 22:50:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.249 ************************************ 00:20:22.249 END TEST nvmf_tcp 00:20:22.249 ************************************ 00:20:22.249 22:50:37 -- common/autotest_common.sh@1142 -- 
# return 0 00:20:22.249 22:50:37 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:22.249 22:50:37 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:22.249 22:50:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:22.249 22:50:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.249 22:50:37 -- common/autotest_common.sh@10 -- # set +x 00:20:22.250 ************************************ 00:20:22.250 START TEST nvmf_dif 00:20:22.250 ************************************ 00:20:22.250 22:50:37 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:22.250 * Looking for test storage... 00:20:22.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:22.250 22:50:37 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.250 22:50:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.250 22:50:37 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.250 22:50:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.250 22:50:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.250 22:50:37 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.250 22:50:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.250 22:50:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:22.250 22:50:37 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.250 22:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:22.250 22:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:22.250 22:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:22.250 22:50:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:22.250 22:50:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.250 22:50:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:22.250 22:50:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:22.250 22:50:37 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:22.250 Cannot find device "nvmf_tgt_br" 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.250 Cannot find device "nvmf_tgt_br2" 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:22.250 Cannot find device "nvmf_tgt_br" 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:22.250 Cannot find device "nvmf_tgt_br2" 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:22.250 22:50:37 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.508 22:50:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:22.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:22.508 00:20:22.508 --- 10.0.0.2 ping statistics --- 00:20:22.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.508 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:22.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:22.508 00:20:22.508 --- 10.0.0.3 ping statistics --- 00:20:22.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.508 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:22.508 00:20:22.508 --- 10.0.0.1 ping statistics --- 00:20:22.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.508 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:22.508 22:50:38 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.074 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.074 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.074 22:50:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:23.074 22:50:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83093 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:23.074 22:50:38 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83093 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83093 ']' 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.074 22:50:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.074 [2024-07-15 22:50:38.511290] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:20:23.074 [2024-07-15 22:50:38.511398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.332 [2024-07-15 22:50:38.653077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.332 [2024-07-15 22:50:38.767197] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
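[Editor's note, not part of the captured console output] With NET_TYPE=virt, nvmf_veth_init builds the test network seen above entirely in software: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br joined by the bridge nvmf_br, 10.0.0.1/24 on the initiator side and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace; the three pings confirm connectivity before nvmf_tgt is launched inside the namespace with "ip netns exec nvmf_tgt_ns_spdk". Condensed from the commands traced above (a sketch of the topology, not the full common.sh logic):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT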
00:20:23.332 [2024-07-15 22:50:38.767254] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.332 [2024-07-15 22:50:38.767268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.332 [2024-07-15 22:50:38.767278] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.332 [2024-07-15 22:50:38.767287] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.332 [2024-07-15 22:50:38.767329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.332 [2024-07-15 22:50:38.823961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:23.897 22:50:39 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.897 22:50:39 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:23.897 22:50:39 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.897 22:50:39 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.897 22:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 22:50:39 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.156 22:50:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:24.156 22:50:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:24.156 22:50:39 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.156 22:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 [2024-07-15 22:50:39.499311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.156 22:50:39 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.156 22:50:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:24.156 22:50:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.156 22:50:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.156 22:50:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 ************************************ 00:20:24.156 START TEST fio_dif_1_default 00:20:24.156 ************************************ 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 bdev_null0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.156 
22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 [2024-07-15 22:50:39.543390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.156 { 00:20:24.156 "params": { 00:20:24.156 "name": "Nvme$subsystem", 00:20:24.156 "trtype": "$TEST_TRANSPORT", 00:20:24.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.156 "adrfam": "ipv4", 00:20:24.156 "trsvcid": "$NVMF_PORT", 00:20:24.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.156 "hdgst": ${hdgst:-false}, 00:20:24.156 "ddgst": ${ddgst:-false} 00:20:24.156 }, 00:20:24.156 "method": "bdev_nvme_attach_controller" 00:20:24.156 } 00:20:24.156 EOF 00:20:24.156 )") 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:24.156 22:50:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:24.156 "params": { 00:20:24.156 "name": "Nvme0", 00:20:24.157 "trtype": "tcp", 00:20:24.157 "traddr": "10.0.0.2", 00:20:24.157 "adrfam": "ipv4", 00:20:24.157 "trsvcid": "4420", 00:20:24.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.157 "hdgst": false, 00:20:24.157 "ddgst": false 00:20:24.157 }, 00:20:24.157 "method": "bdev_nvme_attach_controller" 00:20:24.157 }' 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.157 22:50:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.416 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:24.416 fio-3.35 00:20:24.416 Starting 1 thread 00:20:36.611 00:20:36.611 filename0: (groupid=0, jobs=1): err= 0: pid=83165: Mon Jul 15 22:50:50 2024 00:20:36.611 read: IOPS=8796, BW=34.4MiB/s (36.0MB/s)(344MiB/10001msec) 00:20:36.611 slat (nsec): min=6418, max=70850, avg=8691.17, stdev=3329.27 00:20:36.611 clat (usec): min=339, max=1604, avg=429.09, stdev=34.11 00:20:36.612 lat (usec): min=346, max=1614, avg=437.79, stdev=34.89 00:20:36.612 clat percentiles (usec): 00:20:36.612 | 1.00th=[ 359], 5.00th=[ 
375], 10.00th=[ 388], 20.00th=[ 404], 00:20:36.612 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 437], 00:20:36.612 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 469], 95.00th=[ 482], 00:20:36.612 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 562], 99.95th=[ 594], 00:20:36.612 | 99.99th=[ 1172] 00:20:36.612 bw ( KiB/s): min=34432, max=37152, per=100.00%, avg=35216.84, stdev=663.94, samples=19 00:20:36.612 iops : min= 8608, max= 9288, avg=8804.21, stdev=165.99, samples=19 00:20:36.612 lat (usec) : 500=98.35%, 750=1.61%, 1000=0.02% 00:20:36.612 lat (msec) : 2=0.01% 00:20:36.612 cpu : usr=84.57%, sys=13.47%, ctx=23, majf=0, minf=0 00:20:36.612 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.612 issued rwts: total=87976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.612 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:36.612 00:20:36.612 Run status group 0 (all jobs): 00:20:36.612 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=344MiB (360MB), run=10001-10001msec 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 00:20:36.612 real 0m11.019s 00:20:36.612 user 0m9.097s 00:20:36.612 sys 0m1.628s 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.612 ************************************ 00:20:36.612 END TEST fio_dif_1_default 00:20:36.612 ************************************ 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:36.612 22:50:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:36.612 22:50:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:36.612 22:50:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 ************************************ 00:20:36.612 START TEST fio_dif_1_multi_subsystems 00:20:36.612 ************************************ 00:20:36.612 
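[Editor's note, not part of the captured console output] fio_dif_1_multi_subsystems repeats the single-subsystem pattern above, but with two DIF type 1 null bdevs, each exported through its own NVMe-oF subsystem listening on 10.0.0.2:4420; fio then drives both as separate filenames through the spdk_bdev ioengine, using the JSON generated by gen_nvmf_target_json (the two bdev_nvme_attach_controller entries printed further below). A condensed sketch of the target-side setup, using the same RPCs that appear in the trace (rpc.py path as used elsewhere in this run; the loop is an editorial shorthand, not the literal script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 0 1; do
      # null bdev with 16-byte metadata and DIF type 1, 64 MiB x 512-byte blocks
      $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  # fio then consumes the generated bdev JSON over a file descriptor, as in the trace:
  #   fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61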
22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 bdev_null0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 [2024-07-15 22:50:50.617086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 bdev_null1 00:20:36.612 22:50:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.612 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.612 { 00:20:36.612 "params": { 00:20:36.612 "name": "Nvme$subsystem", 00:20:36.613 "trtype": "$TEST_TRANSPORT", 00:20:36.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.613 "adrfam": "ipv4", 00:20:36.613 "trsvcid": "$NVMF_PORT", 00:20:36.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.613 "hdgst": ${hdgst:-false}, 00:20:36.613 "ddgst": ${ddgst:-false} 00:20:36.613 }, 00:20:36.613 "method": "bdev_nvme_attach_controller" 00:20:36.613 } 00:20:36.613 EOF 00:20:36.613 )") 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:36.613 22:50:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.613 { 00:20:36.613 "params": { 00:20:36.613 "name": "Nvme$subsystem", 00:20:36.613 "trtype": "$TEST_TRANSPORT", 00:20:36.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.613 "adrfam": "ipv4", 00:20:36.613 "trsvcid": "$NVMF_PORT", 00:20:36.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.613 "hdgst": ${hdgst:-false}, 00:20:36.613 "ddgst": ${ddgst:-false} 00:20:36.613 }, 00:20:36.613 "method": "bdev_nvme_attach_controller" 00:20:36.613 } 00:20:36.613 EOF 00:20:36.613 )") 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:36.613 "params": { 00:20:36.613 "name": "Nvme0", 00:20:36.613 "trtype": "tcp", 00:20:36.613 "traddr": "10.0.0.2", 00:20:36.613 "adrfam": "ipv4", 00:20:36.613 "trsvcid": "4420", 00:20:36.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.613 "hdgst": false, 00:20:36.613 "ddgst": false 00:20:36.613 }, 00:20:36.613 "method": "bdev_nvme_attach_controller" 00:20:36.613 },{ 00:20:36.613 "params": { 00:20:36.613 "name": "Nvme1", 00:20:36.613 "trtype": "tcp", 00:20:36.613 "traddr": "10.0.0.2", 00:20:36.613 "adrfam": "ipv4", 00:20:36.613 "trsvcid": "4420", 00:20:36.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.613 "hdgst": false, 00:20:36.613 "ddgst": false 00:20:36.613 }, 00:20:36.613 "method": "bdev_nvme_attach_controller" 00:20:36.613 }' 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.613 22:50:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.613 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:36.613 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:36.613 fio-3.35 00:20:36.613 Starting 2 threads 00:20:46.578 00:20:46.578 filename0: (groupid=0, jobs=1): err= 0: pid=83324: Mon Jul 15 22:51:01 2024 00:20:46.578 read: IOPS=4771, BW=18.6MiB/s (19.5MB/s)(186MiB/10001msec) 00:20:46.578 slat (nsec): min=6856, max=72903, avg=13715.54, stdev=4440.64 00:20:46.578 clat (usec): min=595, max=1719, avg=800.55, stdev=50.45 00:20:46.578 lat (usec): min=604, max=1748, avg=814.27, stdev=51.56 00:20:46.578 clat percentiles (usec): 00:20:46.578 | 1.00th=[ 676], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 758], 00:20:46.578 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 816], 00:20:46.578 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 881], 00:20:46.578 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 979], 00:20:46.578 | 99.99th=[ 1221] 00:20:46.578 bw ( KiB/s): min=18336, max=19424, per=50.04%, avg=19105.68, stdev=292.21, samples=19 00:20:46.578 iops : min= 4584, max= 
4856, avg=4776.42, stdev=73.05, samples=19 00:20:46.578 lat (usec) : 750=15.33%, 1000=84.64% 00:20:46.578 lat (msec) : 2=0.03% 00:20:46.578 cpu : usr=90.43%, sys=8.06%, ctx=20, majf=0, minf=0 00:20:46.578 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.578 issued rwts: total=47720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.578 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:46.578 filename1: (groupid=0, jobs=1): err= 0: pid=83325: Mon Jul 15 22:51:01 2024 00:20:46.578 read: IOPS=4772, BW=18.6MiB/s (19.5MB/s)(186MiB/10001msec) 00:20:46.578 slat (nsec): min=6818, max=77770, avg=13863.36, stdev=4569.72 00:20:46.578 clat (usec): min=412, max=1049, avg=799.45, stdev=41.72 00:20:46.578 lat (usec): min=419, max=1063, avg=813.32, stdev=42.33 00:20:46.578 clat percentiles (usec): 00:20:46.578 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 766], 00:20:46.578 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:20:46.578 | 70.00th=[ 824], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 873], 00:20:46.578 | 99.00th=[ 906], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 963], 00:20:46.578 | 99.99th=[ 1004] 00:20:46.578 bw ( KiB/s): min=18336, max=19424, per=50.05%, avg=19109.74, stdev=292.29, samples=19 00:20:46.578 iops : min= 4584, max= 4856, avg=4777.42, stdev=73.08, samples=19 00:20:46.578 lat (usec) : 500=0.01%, 750=10.98%, 1000=89.00% 00:20:46.578 lat (msec) : 2=0.01% 00:20:46.578 cpu : usr=89.46%, sys=9.05%, ctx=10, majf=0, minf=0 00:20:46.578 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.578 issued rwts: total=47732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.578 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:46.578 00:20:46.578 Run status group 0 (all jobs): 00:20:46.578 READ: bw=37.3MiB/s (39.1MB/s), 18.6MiB/s-18.6MiB/s (19.5MB/s-19.5MB/s), io=373MiB (391MB), run=10001-10001msec 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 00:20:46.578 real 0m11.147s 00:20:46.578 user 0m18.759s 00:20:46.578 sys 0m2.037s 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 ************************************ 00:20:46.578 END TEST fio_dif_1_multi_subsystems 00:20:46.578 ************************************ 00:20:46.578 22:51:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:46.578 22:51:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:46.578 22:51:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:46.578 22:51:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 ************************************ 00:20:46.578 START TEST fio_dif_rand_params 00:20:46.578 ************************************ 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:46.578 22:51:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 bdev_null0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.578 [2024-07-15 22:51:01.824421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:46.578 22:51:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.578 { 00:20:46.578 "params": { 00:20:46.578 "name": "Nvme$subsystem", 00:20:46.578 "trtype": "$TEST_TRANSPORT", 00:20:46.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.578 "adrfam": "ipv4", 00:20:46.578 "trsvcid": "$NVMF_PORT", 00:20:46.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.578 "hdgst": ${hdgst:-false}, 00:20:46.578 "ddgst": ${ddgst:-false} 00:20:46.578 }, 00:20:46.578 "method": "bdev_nvme_attach_controller" 00:20:46.578 } 00:20:46.578 EOF 00:20:46.578 )") 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
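[Annotation] The create_subsystems step traced above builds the target side of this test case entirely over SPDK's JSON-RPC interface: a null bdev (the 64/512 arguments are size in MB and block size) with 16-byte metadata and DIF type 3, wrapped in an NVMe-oF subsystem listening on TCP 10.0.0.2:4420. rpc_cmd is the autotest harness's wrapper around that interface; outside the harness the same sequence could be issued with scripts/rpc.py. A minimal sketch, not the harness's own code, assuming a running nvmf_tgt on the default RPC socket and that the TCP transport has already been created (as the suite does earlier via nvmf_create_transport); names and arguments are copied from the log above:

  # DIF type 3 null bdev: size 64 (MB), 512-byte blocks, 16-byte metadata
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # expose it through an NVMe-oF subsystem on TCP 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The teardown traced earlier (destroy_subsystems) is the mirror image: nvmf_delete_subsystem followed by bdev_null_delete for each index.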
00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:46.578 "params": { 00:20:46.578 "name": "Nvme0", 00:20:46.578 "trtype": "tcp", 00:20:46.578 "traddr": "10.0.0.2", 00:20:46.578 "adrfam": "ipv4", 00:20:46.578 "trsvcid": "4420", 00:20:46.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:46.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:46.578 "hdgst": false, 00:20:46.578 "ddgst": false 00:20:46.578 }, 00:20:46.578 "method": "bdev_nvme_attach_controller" 00:20:46.578 }' 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:46.578 22:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.578 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:46.578 ... 
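[Annotation] On the initiator side, gen_nvmf_target_json expands the heredoc template into the concrete bdev_nvme_attach_controller configuration printed just above, and fio_bdev passes it to fio over /dev/fd/62 while LD_PRELOAD loads the SPDK bdev fio plugin so the spdk_bdev ioengine is available. A rough stand-alone equivalent under stated assumptions: the fio and plugin paths are taken from this log, bdev.json and dif.fio are hypothetical file names standing in for the two /dev/fd pipes, and the job parameters mirror the banner above (rw=randread, bs=128k, iodepth=3, 3 jobs, 5 s runtime); the filename value is an assumption based on bdev_nvme's usual Nvme0n1 naming for the attached namespace:

  # bdev.json: the resolved JSON printed above (bdev_nvme_attach_controller for Nvme0)
  # dif.fio:   a job file along the lines of the generated one, e.g.
  #   [global]
  #   thread=1          ; the SPDK plugin runs fio jobs as threads
  #   rw=randread
  #   bs=128k
  #   iodepth=3
  #   numjobs=3
  #   runtime=5
  #   [filename0]
  #   filename=Nvme0n1  ; assumed bdev name exposed by the attached controller
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio

The harness avoids temporary files by feeding both the JSON config and the job file through process substitution, which is why the traced command references /dev/fd/62 and /dev/fd/61.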
00:20:46.578 fio-3.35 00:20:46.578 Starting 3 threads 00:20:53.141 00:20:53.141 filename0: (groupid=0, jobs=1): err= 0: pid=83481: Mon Jul 15 22:51:07 2024 00:20:53.141 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(162MiB/5003msec) 00:20:53.141 slat (nsec): min=7291, max=39698, avg=10288.00, stdev=3676.11 00:20:53.141 clat (usec): min=11194, max=13799, avg=11528.54, stdev=164.71 00:20:53.141 lat (usec): min=11202, max=13832, avg=11538.83, stdev=165.15 00:20:53.141 clat percentiles (usec): 00:20:53.141 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:53.141 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:20:53.141 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:20:53.141 | 99.00th=[11863], 99.50th=[11994], 99.90th=[13829], 99.95th=[13829], 00:20:53.141 | 99.99th=[13829] 00:20:53.141 bw ( KiB/s): min=33024, max=33792, per=33.29%, avg=33194.67, stdev=338.66, samples=9 00:20:53.141 iops : min= 258, max= 264, avg=259.33, stdev= 2.65, samples=9 00:20:53.141 lat (msec) : 20=100.00% 00:20:53.141 cpu : usr=91.28%, sys=8.14%, ctx=13, majf=0, minf=9 00:20:53.141 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.141 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:53.141 filename0: (groupid=0, jobs=1): err= 0: pid=83482: Mon Jul 15 22:51:07 2024 00:20:53.141 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(162MiB/5001msec) 00:20:53.141 slat (nsec): min=7049, max=55873, avg=14901.34, stdev=3952.10 00:20:53.141 clat (usec): min=9929, max=13737, avg=11515.23, stdev=180.76 00:20:53.141 lat (usec): min=9942, max=13756, avg=11530.13, stdev=180.85 00:20:53.141 clat percentiles (usec): 00:20:53.141 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:53.141 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:53.141 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:20:53.141 | 99.00th=[11863], 99.50th=[11994], 99.90th=[13698], 99.95th=[13698], 00:20:53.141 | 99.99th=[13698] 00:20:53.141 bw ( KiB/s): min=33024, max=33792, per=33.29%, avg=33194.67, stdev=338.66, samples=9 00:20:53.141 iops : min= 258, max= 264, avg=259.33, stdev= 2.65, samples=9 00:20:53.141 lat (msec) : 10=0.23%, 20=99.77% 00:20:53.141 cpu : usr=90.94%, sys=8.48%, ctx=54, majf=0, minf=9 00:20:53.141 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.141 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:53.141 filename0: (groupid=0, jobs=1): err= 0: pid=83483: Mon Jul 15 22:51:07 2024 00:20:53.141 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(162MiB/5001msec) 00:20:53.141 slat (nsec): min=7957, max=44411, avg=14180.29, stdev=3256.61 00:20:53.141 clat (usec): min=9929, max=13746, avg=11518.10, stdev=180.60 00:20:53.141 lat (usec): min=9942, max=13771, avg=11532.28, stdev=180.80 00:20:53.141 clat percentiles (usec): 00:20:53.141 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:53.141 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11469], 60.00th=[11600], 00:20:53.141 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:20:53.141 | 99.00th=[11863], 99.50th=[11994], 99.90th=[13698], 99.95th=[13698], 00:20:53.141 | 99.99th=[13698] 00:20:53.141 bw ( KiB/s): min=33024, max=33792, per=33.29%, avg=33194.67, stdev=338.66, samples=9 00:20:53.141 iops : min= 258, max= 264, avg=259.33, stdev= 2.65, samples=9 00:20:53.141 lat (msec) : 10=0.23%, 20=99.77% 00:20:53.141 cpu : usr=91.96%, sys=7.46%, ctx=8, majf=0, minf=9 00:20:53.141 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.141 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:53.141 00:20:53.141 Run status group 0 (all jobs): 00:20:53.141 READ: bw=97.4MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=487MiB (511MB), run=5001-5003msec 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:53.142 22:51:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 bdev_null0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 [2024-07-15 22:51:07.825562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 bdev_null1 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 bdev_null2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.142 { 00:20:53.142 "params": { 00:20:53.142 "name": "Nvme$subsystem", 00:20:53.142 "trtype": "$TEST_TRANSPORT", 00:20:53.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.142 "adrfam": "ipv4", 00:20:53.142 "trsvcid": "$NVMF_PORT", 00:20:53.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:53.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.142 "hdgst": ${hdgst:-false}, 00:20:53.142 "ddgst": ${ddgst:-false} 00:20:53.142 }, 00:20:53.142 "method": "bdev_nvme_attach_controller" 00:20:53.142 } 00:20:53.142 EOF 00:20:53.142 )") 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.142 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.142 { 00:20:53.142 "params": { 00:20:53.142 "name": "Nvme$subsystem", 00:20:53.142 "trtype": "$TEST_TRANSPORT", 00:20:53.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.142 "adrfam": "ipv4", 00:20:53.142 "trsvcid": "$NVMF_PORT", 00:20:53.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.142 "hdgst": ${hdgst:-false}, 00:20:53.142 "ddgst": ${ddgst:-false} 00:20:53.142 }, 00:20:53.142 "method": "bdev_nvme_attach_controller" 00:20:53.142 } 00:20:53.143 EOF 00:20:53.143 )") 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:53.143 22:51:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.143 { 00:20:53.143 "params": { 00:20:53.143 "name": "Nvme$subsystem", 00:20:53.143 "trtype": "$TEST_TRANSPORT", 00:20:53.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.143 "adrfam": "ipv4", 00:20:53.143 "trsvcid": "$NVMF_PORT", 00:20:53.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.143 "hdgst": ${hdgst:-false}, 00:20:53.143 "ddgst": ${ddgst:-false} 00:20:53.143 }, 00:20:53.143 "method": "bdev_nvme_attach_controller" 00:20:53.143 } 00:20:53.143 EOF 00:20:53.143 )") 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:53.143 "params": { 00:20:53.143 "name": "Nvme0", 00:20:53.143 "trtype": "tcp", 00:20:53.143 "traddr": "10.0.0.2", 00:20:53.143 "adrfam": "ipv4", 00:20:53.143 "trsvcid": "4420", 00:20:53.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:53.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:53.143 "hdgst": false, 00:20:53.143 "ddgst": false 00:20:53.143 }, 00:20:53.143 "method": "bdev_nvme_attach_controller" 00:20:53.143 },{ 00:20:53.143 "params": { 00:20:53.143 "name": "Nvme1", 00:20:53.143 "trtype": "tcp", 00:20:53.143 "traddr": "10.0.0.2", 00:20:53.143 "adrfam": "ipv4", 00:20:53.143 "trsvcid": "4420", 00:20:53.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.143 "hdgst": false, 00:20:53.143 "ddgst": false 00:20:53.143 }, 00:20:53.143 "method": "bdev_nvme_attach_controller" 00:20:53.143 },{ 00:20:53.143 "params": { 00:20:53.143 "name": "Nvme2", 00:20:53.143 "trtype": "tcp", 00:20:53.143 "traddr": "10.0.0.2", 00:20:53.143 "adrfam": "ipv4", 00:20:53.143 "trsvcid": "4420", 00:20:53.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.143 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.143 "hdgst": false, 00:20:53.143 "ddgst": false 00:20:53.143 }, 00:20:53.143 "method": "bdev_nvme_attach_controller" 00:20:53.143 }' 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:53.143 22:51:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.143 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:53.143 ... 00:20:53.143 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:53.143 ... 00:20:53.143 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:53.143 ... 00:20:53.143 fio-3.35 00:20:53.143 Starting 24 threads 00:21:05.355 00:21:05.355 filename0: (groupid=0, jobs=1): err= 0: pid=83578: Mon Jul 15 22:51:18 2024 00:21:05.355 read: IOPS=209, BW=839KiB/s (859kB/s)(8424KiB/10043msec) 00:21:05.355 slat (usec): min=7, max=8020, avg=19.57, stdev=179.97 00:21:05.355 clat (msec): min=22, max=167, avg=76.14, stdev=19.95 00:21:05.355 lat (msec): min=22, max=167, avg=76.16, stdev=19.95 00:21:05.355 clat percentiles (msec): 00:21:05.355 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:21:05.355 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:21:05.355 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 110], 00:21:05.355 | 99.00th=[ 125], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.355 | 99.99th=[ 169] 00:21:05.355 bw ( KiB/s): min= 656, max= 1064, per=4.20%, avg=835.45, stdev=99.26, samples=20 00:21:05.355 iops : min= 164, max= 266, avg=208.80, stdev=24.85, samples=20 00:21:05.355 lat (msec) : 50=10.78%, 100=78.54%, 250=10.68% 00:21:05.355 cpu : usr=32.99%, sys=2.03%, ctx=934, majf=0, minf=9 00:21:05.355 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=81.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:05.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.355 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.355 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.355 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.355 filename0: (groupid=0, jobs=1): err= 0: pid=83579: Mon Jul 15 22:51:18 2024 00:21:05.355 read: IOPS=190, BW=764KiB/s (782kB/s)(7668KiB/10043msec) 00:21:05.355 slat (usec): min=7, max=6027, avg=26.27, stdev=239.46 00:21:05.355 clat (msec): min=2, max=170, avg=83.47, stdev=26.59 00:21:05.355 lat (msec): min=2, max=170, avg=83.50, stdev=26.60 00:21:05.355 clat percentiles (msec): 00:21:05.355 | 1.00th=[ 4], 5.00th=[ 42], 10.00th=[ 55], 20.00th=[ 67], 00:21:05.355 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 90], 00:21:05.355 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 120], 00:21:05.355 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 171], 00:21:05.355 | 99.99th=[ 171] 00:21:05.355 bw ( KiB/s): min= 512, max= 1651, per=3.84%, avg=763.00, stdev=234.05, samples=20 00:21:05.355 iops : min= 128, max= 412, avg=190.65, stdev=58.36, samples=20 00:21:05.355 lat (msec) : 4=1.67%, 10=1.67%, 20=0.73%, 50=2.97%, 100=68.34% 00:21:05.355 lat (msec) : 250=24.62% 00:21:05.355 cpu : usr=45.72%, sys=2.66%, ctx=1333, majf=0, minf=0 00:21:05.355 IO depths : 1=0.2%, 2=5.0%, 4=19.5%, 8=62.0%, 16=13.4%, 32=0.0%, >=64=0.0% 00:21:05.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.355 complete : 0=0.0%, 4=92.9%, 8=2.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=1917,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename0: (groupid=0, jobs=1): err= 0: pid=83580: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=201, BW=805KiB/s (824kB/s)(8068KiB/10028msec) 00:21:05.356 slat (usec): min=7, max=8025, avg=30.61, stdev=308.89 00:21:05.356 clat (msec): min=27, max=163, avg=79.33, stdev=20.29 00:21:05.356 lat (msec): min=27, max=163, avg=79.36, stdev=20.31 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 53], 20.00th=[ 61], 00:21:05.356 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 83], 00:21:05.356 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 116], 00:21:05.356 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 153], 00:21:05.356 | 99.99th=[ 163] 00:21:05.356 bw ( KiB/s): min= 634, max= 1024, per=4.03%, avg=802.10, stdev=102.76, samples=20 00:21:05.356 iops : min= 158, max= 256, avg=200.50, stdev=25.73, samples=20 00:21:05.356 lat (msec) : 50=6.49%, 100=77.49%, 250=16.01% 00:21:05.356 cpu : usr=42.75%, sys=2.37%, ctx=1203, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=71.8%, 16=14.3%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=90.0%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename0: (groupid=0, jobs=1): err= 0: pid=83581: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=217, BW=871KiB/s (892kB/s)(8720KiB/10013msec) 00:21:05.356 slat (usec): min=7, max=8023, avg=17.79, stdev=171.62 00:21:05.356 clat (msec): min=20, max=142, avg=73.41, stdev=20.23 00:21:05.356 lat (msec): min=20, max=142, avg=73.43, stdev=20.24 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 56], 00:21:05.356 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:21:05.356 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 101], 95.00th=[ 109], 00:21:05.356 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 142], 00:21:05.356 | 99.99th=[ 142] 00:21:05.356 bw ( KiB/s): min= 720, max= 1024, per=4.36%, avg=868.05, stdev=84.05, samples=20 00:21:05.356 iops : min= 180, max= 256, avg=217.00, stdev=21.03, samples=20 00:21:05.356 lat (msec) : 50=12.80%, 100=77.02%, 250=10.18% 00:21:05.356 cpu : usr=36.71%, sys=1.89%, ctx=1065, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename0: (groupid=0, jobs=1): err= 0: pid=83582: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=200, BW=803KiB/s (822kB/s)(8040KiB/10010msec) 00:21:05.356 slat (nsec): min=7881, max=58580, avg=14005.18, stdev=5108.06 00:21:05.356 clat (msec): min=14, max=155, avg=79.59, stdev=22.36 00:21:05.356 lat (msec): min=14, max=155, avg=79.61, stdev=22.36 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 60], 00:21:05.356 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:21:05.356 | 70.00th=[ 88], 80.00th=[ 
96], 90.00th=[ 108], 95.00th=[ 121], 00:21:05.356 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 157], 00:21:05.356 | 99.99th=[ 157] 00:21:05.356 bw ( KiB/s): min= 544, max= 1024, per=3.98%, avg=792.21, stdev=117.52, samples=19 00:21:05.356 iops : min= 136, max= 256, avg=198.05, stdev=29.38, samples=19 00:21:05.356 lat (msec) : 20=0.30%, 50=10.15%, 100=73.58%, 250=15.97% 00:21:05.356 cpu : usr=31.51%, sys=1.67%, ctx=969, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=2.4%, 4=9.9%, 8=73.2%, 16=14.5%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=89.7%, 8=8.1%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename0: (groupid=0, jobs=1): err= 0: pid=83583: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=205, BW=821KiB/s (841kB/s)(8244KiB/10038msec) 00:21:05.356 slat (usec): min=5, max=4028, avg=16.32, stdev=88.84 00:21:05.356 clat (msec): min=13, max=153, avg=77.77, stdev=21.74 00:21:05.356 lat (msec): min=13, max=153, avg=77.79, stdev=21.74 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 57], 00:21:05.356 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 82], 00:21:05.356 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:21:05.356 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.356 | 99.99th=[ 155] 00:21:05.356 bw ( KiB/s): min= 622, max= 1152, per=4.12%, avg=819.80, stdev=115.12, samples=20 00:21:05.356 iops : min= 155, max= 288, avg=204.90, stdev=28.82, samples=20 00:21:05.356 lat (msec) : 20=0.68%, 50=8.49%, 100=76.03%, 250=14.80% 00:21:05.356 cpu : usr=43.43%, sys=2.32%, ctx=1361, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=88.9%, 8=9.8%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename0: (groupid=0, jobs=1): err= 0: pid=83584: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=216, BW=864KiB/s (885kB/s)(8660KiB/10019msec) 00:21:05.356 slat (usec): min=3, max=7024, avg=17.24, stdev=150.76 00:21:05.356 clat (msec): min=28, max=143, avg=73.92, stdev=20.55 00:21:05.356 lat (msec): min=28, max=143, avg=73.94, stdev=20.55 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:21:05.356 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:21:05.356 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 114], 00:21:05.356 | 99.00th=[ 127], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.356 | 99.99th=[ 144] 00:21:05.356 bw ( KiB/s): min= 744, max= 1080, per=4.33%, avg=861.65, stdev=82.10, samples=20 00:21:05.356 iops : min= 186, max= 270, avg=215.40, stdev=20.53, samples=20 00:21:05.356 lat (msec) : 50=15.06%, 100=73.39%, 250=11.55% 00:21:05.356 cpu : usr=32.81%, sys=1.45%, ctx=1205, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.3%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename0: (groupid=0, jobs=1): err= 0: pid=83585: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=193, BW=776KiB/s (794kB/s)(7776KiB/10024msec) 00:21:05.356 slat (usec): min=7, max=8027, avg=25.54, stdev=314.66 00:21:05.356 clat (msec): min=29, max=147, avg=82.32, stdev=21.28 00:21:05.356 lat (msec): min=29, max=147, avg=82.34, stdev=21.29 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:21:05.356 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:21:05.356 | 70.00th=[ 93], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 121], 00:21:05.356 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:21:05.356 | 99.99th=[ 148] 00:21:05.356 bw ( KiB/s): min= 634, max= 968, per=3.88%, avg=772.90, stdev=106.48, samples=20 00:21:05.356 iops : min= 158, max= 242, avg=193.20, stdev=26.65, samples=20 00:21:05.356 lat (msec) : 50=7.25%, 100=73.10%, 250=19.65% 00:21:05.356 cpu : usr=36.11%, sys=1.89%, ctx=1075, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=3.5%, 4=13.9%, 8=68.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=91.0%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename1: (groupid=0, jobs=1): err= 0: pid=83586: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=214, BW=857KiB/s (878kB/s)(8604KiB/10038msec) 00:21:05.356 slat (usec): min=8, max=4026, avg=19.77, stdev=149.74 00:21:05.356 clat (msec): min=13, max=147, avg=74.49, stdev=21.43 00:21:05.356 lat (msec): min=13, max=147, avg=74.51, stdev=21.44 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 56], 00:21:05.356 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 80], 00:21:05.356 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 116], 00:21:05.356 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:21:05.356 | 99.99th=[ 148] 00:21:05.356 bw ( KiB/s): min= 606, max= 1024, per=4.30%, avg=856.25, stdev=96.12, samples=20 00:21:05.356 iops : min= 151, max= 256, avg=214.00, stdev=24.12, samples=20 00:21:05.356 lat (msec) : 20=0.65%, 50=11.85%, 100=76.29%, 250=11.20% 00:21:05.356 cpu : usr=42.15%, sys=2.49%, ctx=1186, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.356 filename1: (groupid=0, jobs=1): err= 0: pid=83587: Mon Jul 15 22:51:18 2024 00:21:05.356 read: IOPS=204, BW=817KiB/s (836kB/s)(8200KiB/10040msec) 00:21:05.356 slat (usec): min=7, max=8040, avg=21.69, stdev=198.27 00:21:05.356 clat (msec): min=14, max=156, avg=78.18, stdev=21.39 00:21:05.356 lat (msec): min=14, max=156, avg=78.20, stdev=21.39 00:21:05.356 clat percentiles (msec): 00:21:05.356 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 59], 00:21:05.356 | 30.00th=[ 63], 40.00th=[ 
72], 50.00th=[ 80], 60.00th=[ 83], 00:21:05.356 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:21:05.356 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 157], 00:21:05.356 | 99.99th=[ 157] 00:21:05.356 bw ( KiB/s): min= 638, max= 1152, per=4.09%, avg=813.05, stdev=123.56, samples=20 00:21:05.356 iops : min= 159, max= 288, avg=203.20, stdev=30.94, samples=20 00:21:05.356 lat (msec) : 20=0.68%, 50=7.27%, 100=78.34%, 250=13.71% 00:21:05.356 cpu : usr=36.98%, sys=2.25%, ctx=1270, majf=0, minf=9 00:21:05.356 IO depths : 1=0.1%, 2=1.9%, 4=7.4%, 8=75.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:05.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.356 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.356 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename1: (groupid=0, jobs=1): err= 0: pid=83588: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=216, BW=865KiB/s (886kB/s)(8692KiB/10048msec) 00:21:05.357 slat (usec): min=4, max=8027, avg=19.77, stdev=215.06 00:21:05.357 clat (msec): min=2, max=154, avg=73.81, stdev=24.07 00:21:05.357 lat (msec): min=2, max=154, avg=73.83, stdev=24.07 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:21:05.357 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 82], 00:21:05.357 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 110], 00:21:05.357 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.357 | 99.99th=[ 155] 00:21:05.357 bw ( KiB/s): min= 712, max= 1728, per=4.34%, avg=864.00, stdev=216.41, samples=20 00:21:05.357 iops : min= 178, max= 432, avg=215.95, stdev=54.11, samples=20 00:21:05.357 lat (msec) : 4=2.21%, 10=1.38%, 20=0.83%, 50=8.61%, 100=75.29% 00:21:05.357 lat (msec) : 250=11.69% 00:21:05.357 cpu : usr=33.77%, sys=1.79%, ctx=1191, majf=0, minf=0 00:21:05.357 IO depths : 1=0.2%, 2=0.7%, 4=2.2%, 8=80.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename1: (groupid=0, jobs=1): err= 0: pid=83589: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=213, BW=856KiB/s (876kB/s)(8596KiB/10043msec) 00:21:05.357 slat (usec): min=7, max=8024, avg=21.32, stdev=244.33 00:21:05.357 clat (msec): min=15, max=167, avg=74.60, stdev=19.95 00:21:05.357 lat (msec): min=15, max=167, avg=74.62, stdev=19.95 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:21:05.357 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:21:05.357 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 109], 00:21:05.357 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:21:05.357 | 99.99th=[ 169] 00:21:05.357 bw ( KiB/s): min= 688, max= 1088, per=4.30%, avg=855.05, stdev=92.49, samples=20 00:21:05.357 iops : min= 172, max= 272, avg=213.70, stdev=23.15, samples=20 00:21:05.357 lat (msec) : 20=0.65%, 50=11.45%, 100=78.32%, 250=9.59% 00:21:05.357 cpu : usr=31.45%, sys=1.73%, ctx=1003, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:05.357 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename1: (groupid=0, jobs=1): err= 0: pid=83590: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=206, BW=826KiB/s (846kB/s)(8292KiB/10038msec) 00:21:05.357 slat (usec): min=4, max=8026, avg=25.31, stdev=304.66 00:21:05.357 clat (msec): min=34, max=168, avg=77.32, stdev=19.20 00:21:05.357 lat (msec): min=34, max=168, avg=77.35, stdev=19.21 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:21:05.357 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:21:05.357 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 111], 00:21:05.357 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.357 | 99.99th=[ 169] 00:21:05.357 bw ( KiB/s): min= 656, max= 1008, per=4.13%, avg=822.25, stdev=79.69, samples=20 00:21:05.357 iops : min= 164, max= 252, avg=205.50, stdev=19.95, samples=20 00:21:05.357 lat (msec) : 50=8.73%, 100=79.98%, 250=11.29% 00:21:05.357 cpu : usr=31.56%, sys=1.82%, ctx=969, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename1: (groupid=0, jobs=1): err= 0: pid=83591: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=179, BW=718KiB/s (735kB/s)(7188KiB/10013msec) 00:21:05.357 slat (usec): min=7, max=4042, avg=19.13, stdev=134.19 00:21:05.357 clat (msec): min=21, max=157, avg=88.98, stdev=21.18 00:21:05.357 lat (msec): min=21, max=157, avg=89.00, stdev=21.17 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 47], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 75], 00:21:05.357 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 91], 00:21:05.357 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 124], 00:21:05.357 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:21:05.357 | 99.99th=[ 159] 00:21:05.357 bw ( KiB/s): min= 512, max= 968, per=3.59%, avg=714.80, stdev=116.89, samples=20 00:21:05.357 iops : min= 128, max= 242, avg=178.70, stdev=29.22, samples=20 00:21:05.357 lat (msec) : 50=2.17%, 100=69.34%, 250=28.49% 00:21:05.357 cpu : usr=40.44%, sys=2.41%, ctx=1316, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=5.8%, 4=23.3%, 8=58.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=93.9%, 8=0.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename1: (groupid=0, jobs=1): err= 0: pid=83592: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=219, BW=877KiB/s (898kB/s)(8776KiB/10004msec) 00:21:05.357 slat (usec): min=8, max=8026, avg=23.83, stdev=235.23 00:21:05.357 clat (msec): min=9, max=145, avg=72.84, stdev=20.69 00:21:05.357 lat (msec): min=9, max=145, avg=72.87, stdev=20.69 00:21:05.357 clat percentiles (msec): 
00:21:05.357 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:21:05.357 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 78], 00:21:05.357 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 101], 95.00th=[ 110], 00:21:05.357 | 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 146], 00:21:05.357 | 99.99th=[ 146] 00:21:05.357 bw ( KiB/s): min= 768, max= 1072, per=4.37%, avg=869.89, stdev=79.77, samples=19 00:21:05.357 iops : min= 192, max= 268, avg=217.47, stdev=19.94, samples=19 00:21:05.357 lat (msec) : 10=0.32%, 50=12.81%, 100=76.94%, 250=9.94% 00:21:05.357 cpu : usr=41.22%, sys=2.35%, ctx=1257, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename1: (groupid=0, jobs=1): err= 0: pid=83593: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=218, BW=872KiB/s (893kB/s)(8744KiB/10023msec) 00:21:05.357 slat (usec): min=8, max=4025, avg=17.92, stdev=121.42 00:21:05.357 clat (msec): min=27, max=142, avg=73.24, stdev=19.89 00:21:05.357 lat (msec): min=27, max=142, avg=73.26, stdev=19.88 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 55], 00:21:05.357 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 78], 00:21:05.357 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 103], 95.00th=[ 109], 00:21:05.357 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:21:05.357 | 99.99th=[ 142] 00:21:05.357 bw ( KiB/s): min= 761, max= 1024, per=4.37%, avg=869.70, stdev=68.23, samples=20 00:21:05.357 iops : min= 190, max= 256, avg=217.40, stdev=17.07, samples=20 00:21:05.357 lat (msec) : 50=12.49%, 100=77.17%, 250=10.34% 00:21:05.357 cpu : usr=40.20%, sys=2.41%, ctx=1141, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename2: (groupid=0, jobs=1): err= 0: pid=83594: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=198, BW=796KiB/s (815kB/s)(7988KiB/10038msec) 00:21:05.357 slat (usec): min=7, max=8026, avg=21.03, stdev=253.56 00:21:05.357 clat (msec): min=38, max=152, avg=80.23, stdev=20.03 00:21:05.357 lat (msec): min=38, max=152, avg=80.25, stdev=20.03 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 61], 00:21:05.357 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:21:05.357 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:21:05.357 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 153], 00:21:05.357 | 99.99th=[ 153] 00:21:05.357 bw ( KiB/s): min= 637, max= 936, per=3.99%, avg=794.70, stdev=90.56, samples=20 00:21:05.357 iops : min= 159, max= 234, avg=198.65, stdev=22.66, samples=20 00:21:05.357 lat (msec) : 50=6.06%, 100=78.47%, 250=15.47% 00:21:05.357 cpu : usr=31.60%, sys=1.59%, ctx=1000, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 
8=75.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename2: (groupid=0, jobs=1): err= 0: pid=83595: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=205, BW=821KiB/s (840kB/s)(8212KiB/10005msec) 00:21:05.357 slat (usec): min=5, max=6026, avg=16.59, stdev=132.92 00:21:05.357 clat (msec): min=6, max=144, avg=77.88, stdev=22.14 00:21:05.357 lat (msec): min=6, max=144, avg=77.89, stdev=22.13 00:21:05.357 clat percentiles (msec): 00:21:05.357 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:21:05.357 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:21:05.357 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:21:05.357 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 146], 00:21:05.357 | 99.99th=[ 146] 00:21:05.357 bw ( KiB/s): min= 640, max= 1024, per=4.06%, avg=808.00, stdev=117.42, samples=19 00:21:05.357 iops : min= 160, max= 256, avg=202.00, stdev=29.36, samples=19 00:21:05.357 lat (msec) : 10=0.49%, 20=0.29%, 50=11.40%, 100=73.40%, 250=14.42% 00:21:05.357 cpu : usr=31.81%, sys=1.62%, ctx=974, majf=0, minf=9 00:21:05.357 IO depths : 1=0.1%, 2=2.1%, 4=8.5%, 8=74.8%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:05.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.357 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.357 filename2: (groupid=0, jobs=1): err= 0: pid=83596: Mon Jul 15 22:51:18 2024 00:21:05.357 read: IOPS=225, BW=900KiB/s (922kB/s)(9004KiB/10004msec) 00:21:05.357 slat (usec): min=7, max=9024, avg=23.13, stdev=261.99 00:21:05.357 clat (msec): min=3, max=141, avg=71.02, stdev=22.21 00:21:05.358 lat (msec): min=3, max=141, avg=71.04, stdev=22.20 00:21:05.358 clat percentiles (msec): 00:21:05.358 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 51], 00:21:05.358 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 75], 00:21:05.358 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 108], 00:21:05.358 | 99.00th=[ 122], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:21:05.358 | 99.99th=[ 142] 00:21:05.358 bw ( KiB/s): min= 713, max= 1080, per=4.41%, avg=877.79, stdev=86.34, samples=19 00:21:05.358 iops : min= 178, max= 270, avg=219.32, stdev=21.61, samples=19 00:21:05.358 lat (msec) : 4=0.40%, 10=1.60%, 20=0.13%, 50=16.93%, 100=71.66% 00:21:05.358 lat (msec) : 250=9.28% 00:21:05.358 cpu : usr=31.82%, sys=1.95%, ctx=1061, majf=0, minf=9 00:21:05.358 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:05.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.358 filename2: (groupid=0, jobs=1): err= 0: pid=83597: Mon Jul 15 22:51:18 2024 00:21:05.358 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10030msec) 00:21:05.358 slat (nsec): min=5290, max=69087, avg=13656.54, stdev=6797.25 00:21:05.358 clat (msec): min=46, 
max=151, avg=84.99, stdev=19.85 00:21:05.358 lat (msec): min=46, max=151, avg=85.00, stdev=19.85 00:21:05.358 clat percentiles (msec): 00:21:05.358 | 1.00th=[ 48], 5.00th=[ 54], 10.00th=[ 60], 20.00th=[ 71], 00:21:05.358 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:21:05.358 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 120], 00:21:05.358 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:21:05.358 | 99.99th=[ 153] 00:21:05.358 bw ( KiB/s): min= 624, max= 928, per=3.77%, avg=749.60, stdev=100.86, samples=20 00:21:05.358 iops : min= 156, max= 232, avg=187.40, stdev=25.22, samples=20 00:21:05.358 lat (msec) : 50=2.44%, 100=76.80%, 250=20.75% 00:21:05.358 cpu : usr=39.87%, sys=2.19%, ctx=1203, majf=0, minf=9 00:21:05.358 IO depths : 1=0.1%, 2=4.5%, 4=18.0%, 8=64.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:21:05.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 complete : 0=0.0%, 4=92.2%, 8=3.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.358 filename2: (groupid=0, jobs=1): err= 0: pid=83598: Mon Jul 15 22:51:18 2024 00:21:05.358 read: IOPS=220, BW=883KiB/s (904kB/s)(8832KiB/10006msec) 00:21:05.358 slat (usec): min=7, max=4071, avg=20.01, stdev=148.61 00:21:05.358 clat (usec): min=1364, max=144066, avg=72398.88, stdev=22851.93 00:21:05.358 lat (usec): min=1372, max=144085, avg=72418.89, stdev=22846.49 00:21:05.358 clat percentiles (msec): 00:21:05.358 | 1.00th=[ 7], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:21:05.358 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 78], 00:21:05.358 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 104], 95.00th=[ 114], 00:21:05.358 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 144], 00:21:05.358 | 99.99th=[ 144] 00:21:05.358 bw ( KiB/s): min= 520, max= 1056, per=4.33%, avg=861.16, stdev=107.58, samples=19 00:21:05.358 iops : min= 130, max= 264, avg=215.21, stdev=26.91, samples=19 00:21:05.358 lat (msec) : 2=0.18%, 4=0.27%, 10=1.22%, 20=0.14%, 50=12.14% 00:21:05.358 lat (msec) : 100=73.96%, 250=12.09% 00:21:05.358 cpu : usr=41.67%, sys=1.99%, ctx=1373, majf=0, minf=9 00:21:05.358 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=82.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:05.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.358 filename2: (groupid=0, jobs=1): err= 0: pid=83599: Mon Jul 15 22:51:18 2024 00:21:05.358 read: IOPS=207, BW=830KiB/s (850kB/s)(8316KiB/10020msec) 00:21:05.358 slat (usec): min=4, max=8025, avg=23.42, stdev=263.49 00:21:05.358 clat (msec): min=27, max=143, avg=76.93, stdev=21.15 00:21:05.358 lat (msec): min=27, max=143, avg=76.96, stdev=21.14 00:21:05.358 clat percentiles (msec): 00:21:05.358 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 58], 00:21:05.358 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 83], 00:21:05.358 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 121], 00:21:05.358 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.358 | 99.99th=[ 144] 00:21:05.358 bw ( KiB/s): min= 528, max= 1000, per=4.16%, avg=827.20, stdev=105.12, samples=20 00:21:05.358 iops : min= 132, max= 250, 
avg=206.80, stdev=26.28, samples=20 00:21:05.358 lat (msec) : 50=12.36%, 100=73.93%, 250=13.71% 00:21:05.358 cpu : usr=35.83%, sys=1.93%, ctx=1014, majf=0, minf=9 00:21:05.358 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=77.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:21:05.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.358 filename2: (groupid=0, jobs=1): err= 0: pid=83600: Mon Jul 15 22:51:18 2024 00:21:05.358 read: IOPS=215, BW=862KiB/s (883kB/s)(8656KiB/10038msec) 00:21:05.358 slat (usec): min=7, max=8026, avg=20.75, stdev=229.15 00:21:05.358 clat (msec): min=33, max=144, avg=74.11, stdev=20.07 00:21:05.358 lat (msec): min=33, max=144, avg=74.13, stdev=20.08 00:21:05.358 clat percentiles (msec): 00:21:05.358 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:21:05.358 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 81], 00:21:05.358 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 109], 00:21:05.358 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.358 | 99.99th=[ 144] 00:21:05.358 bw ( KiB/s): min= 744, max= 1048, per=4.32%, avg=859.80, stdev=76.33, samples=20 00:21:05.358 iops : min= 186, max= 262, avg=214.90, stdev=19.11, samples=20 00:21:05.358 lat (msec) : 50=14.00%, 100=76.20%, 250=9.80% 00:21:05.358 cpu : usr=31.60%, sys=1.78%, ctx=954, majf=0, minf=9 00:21:05.358 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:05.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.358 filename2: (groupid=0, jobs=1): err= 0: pid=83601: Mon Jul 15 22:51:18 2024 00:21:05.358 read: IOPS=215, BW=860KiB/s (881kB/s)(8632KiB/10034msec) 00:21:05.358 slat (usec): min=8, max=4025, avg=21.45, stdev=172.59 00:21:05.358 clat (msec): min=35, max=144, avg=74.22, stdev=20.72 00:21:05.358 lat (msec): min=35, max=144, avg=74.24, stdev=20.72 00:21:05.358 clat percentiles (msec): 00:21:05.358 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 56], 00:21:05.358 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 79], 00:21:05.358 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 114], 00:21:05.358 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:05.358 | 99.99th=[ 144] 00:21:05.358 bw ( KiB/s): min= 573, max= 1024, per=4.32%, avg=859.50, stdev=105.41, samples=20 00:21:05.358 iops : min= 143, max= 256, avg=214.85, stdev=26.38, samples=20 00:21:05.358 lat (msec) : 50=11.54%, 100=76.78%, 250=11.68% 00:21:05.358 cpu : usr=42.64%, sys=2.24%, ctx=1151, majf=0, minf=9 00:21:05.358 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:05.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.358 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.358 00:21:05.358 Run status group 0 (all jobs): 00:21:05.358 READ: bw=19.4MiB/s (20.4MB/s), 718KiB/s-900KiB/s 
(735kB/s-922kB/s), io=195MiB (205MB), run=10004-10048msec 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.358 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 22:51:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 bdev_null0 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 [2024-07-15 22:51:19.202907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 bdev_null1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.359 { 00:21:05.359 "params": { 00:21:05.359 "name": "Nvme$subsystem", 00:21:05.359 "trtype": "$TEST_TRANSPORT", 00:21:05.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.359 "adrfam": "ipv4", 00:21:05.359 "trsvcid": "$NVMF_PORT", 00:21:05.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.359 "hdgst": ${hdgst:-false}, 00:21:05.359 "ddgst": ${ddgst:-false} 00:21:05.359 }, 00:21:05.359 "method": "bdev_nvme_attach_controller" 00:21:05.359 } 00:21:05.359 EOF 00:21:05.359 )") 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.359 22:51:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:05.359 { 00:21:05.359 "params": { 00:21:05.359 "name": "Nvme$subsystem", 00:21:05.359 "trtype": "$TEST_TRANSPORT", 00:21:05.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.359 "adrfam": "ipv4", 00:21:05.359 "trsvcid": "$NVMF_PORT", 00:21:05.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.359 "hdgst": ${hdgst:-false}, 00:21:05.359 "ddgst": ${ddgst:-false} 00:21:05.359 }, 00:21:05.359 "method": "bdev_nvme_attach_controller" 00:21:05.359 } 00:21:05.359 EOF 00:21:05.359 )") 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:05.359 "params": { 00:21:05.359 "name": "Nvme0", 00:21:05.359 "trtype": "tcp", 00:21:05.359 "traddr": "10.0.0.2", 00:21:05.359 "adrfam": "ipv4", 00:21:05.359 "trsvcid": "4420", 00:21:05.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.359 "hdgst": false, 00:21:05.359 "ddgst": false 00:21:05.359 }, 00:21:05.359 "method": "bdev_nvme_attach_controller" 00:21:05.359 },{ 00:21:05.359 "params": { 00:21:05.359 "name": "Nvme1", 00:21:05.359 "trtype": "tcp", 00:21:05.359 "traddr": "10.0.0.2", 00:21:05.359 "adrfam": "ipv4", 00:21:05.359 "trsvcid": "4420", 00:21:05.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.359 "hdgst": false, 00:21:05.359 "ddgst": false 00:21:05.359 }, 00:21:05.359 "method": "bdev_nvme_attach_controller" 00:21:05.359 }' 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.359 22:51:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.359 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:05.359 ... 00:21:05.359 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:05.359 ... 
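The JSON block printed just above is the configuration the spdk_bdev fio plugin consumes: one "bdev_nvme_attach_controller" entry per NVMe/TCP subsystem that the test created through rpc_cmd. A rough standalone sketch of the same target-side setup — assuming a running nvmf_tgt, the rpc.py script shipped in this SPDK tree, and a TCP transport already created earlier in the run — would be:

    # Sketch only; bdev sizes, NQNs and the 10.0.0.2:4420 listener are taken from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1; do
        $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

fio then addresses the attached namespaces as bdevs named after the controller (e.g. Nvme0n1 for the controller named Nvme0), which is what the filename0/filename1 jobs below read from.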
00:21:05.359 fio-3.35 00:21:05.359 Starting 4 threads 00:21:09.544 00:21:09.544 filename0: (groupid=0, jobs=1): err= 0: pid=83735: Mon Jul 15 22:51:25 2024 00:21:09.544 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5005msec) 00:21:09.544 slat (nsec): min=7038, max=48048, avg=11673.67, stdev=3894.35 00:21:09.544 clat (usec): min=648, max=10730, avg=3745.86, stdev=780.26 00:21:09.544 lat (usec): min=657, max=10737, avg=3757.54, stdev=780.57 00:21:09.544 clat percentiles (usec): 00:21:09.544 | 1.00th=[ 1401], 5.00th=[ 2573], 10.00th=[ 3228], 20.00th=[ 3261], 00:21:09.544 | 30.00th=[ 3326], 40.00th=[ 3523], 50.00th=[ 3785], 60.00th=[ 3851], 00:21:09.544 | 70.00th=[ 3884], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5080], 00:21:09.544 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 7570], 99.95th=[ 8094], 00:21:09.544 | 99.99th=[10552] 00:21:09.544 bw ( KiB/s): min=14946, max=19344, per=25.50%, avg=16938.78, stdev=1144.75, samples=9 00:21:09.544 iops : min= 1868, max= 2418, avg=2117.22, stdev=143.20, samples=9 00:21:09.544 lat (usec) : 750=0.06% 00:21:09.544 lat (msec) : 2=2.84%, 4=71.98%, 10=25.11%, 20=0.02% 00:21:09.544 cpu : usr=91.61%, sys=7.55%, ctx=10, majf=0, minf=0 00:21:09.544 IO depths : 1=0.1%, 2=8.7%, 4=63.3%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.544 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.544 issued rwts: total=10582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.544 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.544 filename0: (groupid=0, jobs=1): err= 0: pid=83736: Mon Jul 15 22:51:25 2024 00:21:09.544 read: IOPS=2101, BW=16.4MiB/s (17.2MB/s)(82.1MiB/5001msec) 00:21:09.544 slat (nsec): min=7633, max=51918, avg=14780.26, stdev=3779.59 00:21:09.544 clat (usec): min=979, max=11916, avg=3762.76, stdev=767.86 00:21:09.544 lat (usec): min=991, max=11932, avg=3777.54, stdev=768.00 00:21:09.545 clat percentiles (usec): 00:21:09.545 | 1.00th=[ 1860], 5.00th=[ 2540], 10.00th=[ 3195], 20.00th=[ 3261], 00:21:09.545 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 3785], 60.00th=[ 3851], 00:21:09.545 | 70.00th=[ 3916], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5080], 00:21:09.545 | 99.00th=[ 5473], 99.50th=[ 6194], 99.90th=[ 7701], 99.95th=[ 8160], 00:21:09.545 | 99.99th=[10683] 00:21:09.545 bw ( KiB/s): min=15728, max=18640, per=25.32%, avg=16817.78, stdev=797.33, samples=9 00:21:09.545 iops : min= 1966, max= 2330, avg=2102.22, stdev=99.67, samples=9 00:21:09.545 lat (usec) : 1000=0.01% 00:21:09.545 lat (msec) : 2=1.18%, 4=72.62%, 10=26.16%, 20=0.03% 00:21:09.545 cpu : usr=92.00%, sys=7.16%, ctx=11, majf=0, minf=9 00:21:09.545 IO depths : 1=0.1%, 2=8.7%, 4=63.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.545 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.545 issued rwts: total=10511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.545 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.545 filename1: (groupid=0, jobs=1): err= 0: pid=83737: Mon Jul 15 22:51:25 2024 00:21:09.545 read: IOPS=1986, BW=15.5MiB/s (16.3MB/s)(77.6MiB/5004msec) 00:21:09.545 slat (usec): min=3, max=101, avg=14.67, stdev= 4.48 00:21:09.545 clat (usec): min=1009, max=10721, avg=3979.14, stdev=814.05 00:21:09.545 lat (usec): min=1017, max=10729, avg=3993.81, stdev=814.10 00:21:09.545 clat percentiles (usec): 00:21:09.545 | 1.00th=[ 
2008], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3294], 00:21:09.545 | 30.00th=[ 3392], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:21:09.545 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 5080], 95.00th=[ 5866], 00:21:09.545 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 7504], 99.95th=[ 8586], 00:21:09.545 | 99.99th=[10683] 00:21:09.545 bw ( KiB/s): min=12960, max=17024, per=23.78%, avg=15793.44, stdev=1488.52, samples=9 00:21:09.545 iops : min= 1620, max= 2128, avg=1974.11, stdev=186.11, samples=9 00:21:09.545 lat (msec) : 2=0.92%, 4=65.74%, 10=33.32%, 20=0.02% 00:21:09.545 cpu : usr=91.01%, sys=7.86%, ctx=38, majf=0, minf=9 00:21:09.545 IO depths : 1=0.1%, 2=12.1%, 4=61.0%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.545 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.545 issued rwts: total=9939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.545 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.545 filename1: (groupid=0, jobs=1): err= 0: pid=83738: Mon Jul 15 22:51:25 2024 00:21:09.545 read: IOPS=2102, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5001msec) 00:21:09.545 slat (nsec): min=7424, max=57704, avg=14951.38, stdev=3939.00 00:21:09.545 clat (usec): min=789, max=11928, avg=3758.54, stdev=765.25 00:21:09.545 lat (usec): min=797, max=11939, avg=3773.49, stdev=764.85 00:21:09.545 clat percentiles (usec): 00:21:09.545 | 1.00th=[ 1860], 5.00th=[ 2540], 10.00th=[ 3195], 20.00th=[ 3261], 00:21:09.545 | 30.00th=[ 3294], 40.00th=[ 3523], 50.00th=[ 3785], 60.00th=[ 3851], 00:21:09.545 | 70.00th=[ 3916], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5080], 00:21:09.545 | 99.00th=[ 5538], 99.50th=[ 5997], 99.90th=[ 7701], 99.95th=[ 8160], 00:21:09.545 | 99.99th=[10683] 00:21:09.545 bw ( KiB/s): min=15728, max=18640, per=25.34%, avg=16826.67, stdev=795.06, samples=9 00:21:09.545 iops : min= 1966, max= 2330, avg=2103.33, stdev=99.38, samples=9 00:21:09.545 lat (usec) : 1000=0.06% 00:21:09.545 lat (msec) : 2=1.17%, 4=72.66%, 10=26.08%, 20=0.03% 00:21:09.545 cpu : usr=91.46%, sys=7.66%, ctx=9, majf=0, minf=9 00:21:09.545 IO depths : 1=0.1%, 2=8.7%, 4=63.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.545 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.545 issued rwts: total=10517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.545 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.545 00:21:09.545 Run status group 0 (all jobs): 00:21:09.545 READ: bw=64.9MiB/s (68.0MB/s), 15.5MiB/s-16.5MiB/s (16.3MB/s-17.3MB/s), io=325MiB (340MB), run=5001-5005msec 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.802 00:21:09.802 real 0m23.493s 00:21:09.802 user 2m3.307s 00:21:09.802 sys 0m8.496s 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 ************************************ 00:21:09.802 END TEST fio_dif_rand_params 00:21:09.802 ************************************ 00:21:09.802 22:51:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:09.802 22:51:25 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:09.802 22:51:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:09.802 22:51:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 ************************************ 00:21:09.802 START TEST fio_dif_digest 00:21:09.802 ************************************ 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:09.802 22:51:25 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 bdev_null0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.802 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.060 [2024-07-15 22:51:25.373105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.060 { 00:21:10.060 "params": { 00:21:10.060 "name": "Nvme$subsystem", 00:21:10.060 "trtype": "$TEST_TRANSPORT", 00:21:10.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.060 "adrfam": "ipv4", 00:21:10.060 "trsvcid": "$NVMF_PORT", 00:21:10.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.060 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:10.060 "hdgst": ${hdgst:-false}, 00:21:10.060 "ddgst": ${ddgst:-false} 00:21:10.060 }, 00:21:10.060 "method": "bdev_nvme_attach_controller" 00:21:10.060 } 00:21:10.060 EOF 00:21:10.060 )") 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:10.060 "params": { 00:21:10.060 "name": "Nvme0", 00:21:10.060 "trtype": "tcp", 00:21:10.060 "traddr": "10.0.0.2", 00:21:10.060 "adrfam": "ipv4", 00:21:10.060 "trsvcid": "4420", 00:21:10.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.060 "hdgst": true, 00:21:10.060 "ddgst": true 00:21:10.060 }, 00:21:10.060 "method": "bdev_nvme_attach_controller" 00:21:10.060 }' 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:10.060 22:51:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:10.060 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:10.060 ... 
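Compared with the fio_dif_rand_params runs above, the only functional change in this controller JSON is "hdgst": true and "ddgst": true, so every NVMe/TCP PDU in the digest test carries header and data digests. A minimal hand-driven equivalent of the fio invocation — assuming the fio build and plugin path shown in this log, a JSON file with the same contents as the /dev/fd/62 stream (called nvme_digest.json here purely for illustration), and the usual Nvme0n1 bdev name — might look like:

    # Sketch only; job parameters mirror dif.sh@127 (bs=128k, numjobs=3, iodepth=3, runtime=10).
    # The SPDK fio plugins require thread=1.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme_digest.json \
        --thread=1 --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --runtime=10 --time_based=1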
00:21:10.060 fio-3.35 00:21:10.060 Starting 3 threads 00:21:22.263 00:21:22.263 filename0: (groupid=0, jobs=1): err= 0: pid=83844: Mon Jul 15 22:51:36 2024 00:21:22.263 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10004msec) 00:21:22.263 slat (nsec): min=7438, max=41358, avg=10658.45, stdev=3680.77 00:21:22.263 clat (usec): min=8554, max=16020, avg=13188.77, stdev=244.55 00:21:22.263 lat (usec): min=8563, max=16045, avg=13199.42, stdev=244.78 00:21:22.263 clat percentiles (usec): 00:21:22.263 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:22.263 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:22.263 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:21:22.263 | 99.00th=[13698], 99.50th=[13829], 99.90th=[16057], 99.95th=[16057], 00:21:22.263 | 99.99th=[16057] 00:21:22.263 bw ( KiB/s): min=28416, max=29184, per=33.30%, avg=29025.26, stdev=316.02, samples=19 00:21:22.263 iops : min= 222, max= 228, avg=226.74, stdev= 2.51, samples=19 00:21:22.263 lat (msec) : 10=0.13%, 20=99.87% 00:21:22.263 cpu : usr=91.09%, sys=8.36%, ctx=13, majf=0, minf=9 00:21:22.263 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.263 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.263 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:22.263 filename0: (groupid=0, jobs=1): err= 0: pid=83845: Mon Jul 15 22:51:36 2024 00:21:22.263 read: IOPS=226, BW=28.4MiB/s (29.8MB/s)(284MiB/10005msec) 00:21:22.263 slat (nsec): min=7537, max=57677, avg=11105.42, stdev=4306.71 00:21:22.263 clat (usec): min=12100, max=14009, avg=13188.83, stdev=151.62 00:21:22.263 lat (usec): min=12109, max=14022, avg=13199.93, stdev=151.83 00:21:22.263 clat percentiles (usec): 00:21:22.263 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:22.263 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13173], 00:21:22.263 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:22.263 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:21:22.263 | 99.99th=[13960] 00:21:22.263 bw ( KiB/s): min=28416, max=29952, per=33.34%, avg=29062.74, stdev=385.12, samples=19 00:21:22.263 iops : min= 222, max= 234, avg=227.05, stdev= 3.01, samples=19 00:21:22.263 lat (msec) : 20=100.00% 00:21:22.263 cpu : usr=91.39%, sys=8.05%, ctx=22, majf=0, minf=0 00:21:22.264 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.264 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:22.264 filename0: (groupid=0, jobs=1): err= 0: pid=83846: Mon Jul 15 22:51:36 2024 00:21:22.264 read: IOPS=226, BW=28.4MiB/s (29.8MB/s)(284MiB/10005msec) 00:21:22.264 slat (nsec): min=7508, max=45984, avg=11068.99, stdev=4428.84 00:21:22.264 clat (usec): min=11072, max=13973, avg=13187.85, stdev=165.28 00:21:22.264 lat (usec): min=11080, max=13988, avg=13198.92, stdev=165.66 00:21:22.264 clat percentiles (usec): 00:21:22.264 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:22.264 | 30.00th=[13042], 40.00th=[13173], 
50.00th=[13173], 60.00th=[13173], 00:21:22.264 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13304], 95.00th=[13435], 00:21:22.264 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:21:22.264 | 99.99th=[13960] 00:21:22.264 bw ( KiB/s): min=28416, max=29184, per=33.34%, avg=29062.74, stdev=287.72, samples=19 00:21:22.264 iops : min= 222, max= 228, avg=227.05, stdev= 2.25, samples=19 00:21:22.264 lat (msec) : 20=100.00% 00:21:22.264 cpu : usr=91.50%, sys=7.89%, ctx=151, majf=0, minf=0 00:21:22.264 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.264 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:22.264 00:21:22.264 Run status group 0 (all jobs): 00:21:22.264 READ: bw=85.1MiB/s (89.3MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=852MiB (893MB), run=10004-10005msec 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.264 00:21:22.264 real 0m10.988s 00:21:22.264 user 0m28.032s 00:21:22.264 sys 0m2.710s 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.264 22:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:22.264 ************************************ 00:21:22.264 END TEST fio_dif_digest 00:21:22.264 ************************************ 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:22.264 22:51:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:22.264 22:51:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.264 rmmod nvme_tcp 00:21:22.264 rmmod nvme_fabrics 00:21:22.264 rmmod nvme_keyring 00:21:22.264 22:51:36 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83093 ']' 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83093 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83093 ']' 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83093 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83093 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:22.264 killing process with pid 83093 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83093' 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83093 00:21:22.264 22:51:36 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83093 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:22.264 22:51:36 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:22.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:22.264 Waiting for block devices as requested 00:21:22.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:22.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:22.264 22:51:37 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.264 22:51:37 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.264 22:51:37 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.264 22:51:37 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.264 22:51:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.264 22:51:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:22.264 22:51:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.264 22:51:37 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:22.264 00:21:22.264 real 0m59.696s 00:21:22.264 user 3m47.561s 00:21:22.264 sys 0m19.768s 00:21:22.264 22:51:37 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.264 22:51:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:22.264 ************************************ 00:21:22.264 END TEST nvmf_dif 00:21:22.264 ************************************ 00:21:22.264 22:51:37 -- common/autotest_common.sh@1142 -- # return 0 00:21:22.264 22:51:37 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:22.264 22:51:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:22.264 22:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.264 22:51:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.264 ************************************ 00:21:22.264 START TEST nvmf_abort_qd_sizes 00:21:22.264 ************************************ 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:22.264 * Looking for test storage... 00:21:22.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.264 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:22.265 22:51:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:22.265 Cannot find device "nvmf_tgt_br" 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.265 Cannot find device "nvmf_tgt_br2" 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:22.265 Cannot find device "nvmf_tgt_br" 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:22.265 Cannot find device "nvmf_tgt_br2" 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.265 22:51:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:22.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:21:22.265 00:21:22.265 --- 10.0.0.2 ping statistics --- 00:21:22.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.265 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:22.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:22.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:21:22.265 00:21:22.265 --- 10.0.0.3 ping statistics --- 00:21:22.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.265 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:22.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:21:22.265 00:21:22.265 --- 10.0.0.1 ping statistics --- 00:21:22.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.265 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:22.265 22:51:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:23.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.256 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:23.256 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:23.256 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84438 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84438 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84438 ']' 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.257 22:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:23.529 [2024-07-15 22:51:38.818212] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:21:23.529 [2024-07-15 22:51:38.818642] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.529 [2024-07-15 22:51:38.962199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.787 [2024-07-15 22:51:39.098602] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.787 [2024-07-15 22:51:39.098932] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.787 [2024-07-15 22:51:39.099091] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.787 [2024-07-15 22:51:39.099365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.787 [2024-07-15 22:51:39.099545] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.787 [2024-07-15 22:51:39.099808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.787 [2024-07-15 22:51:39.099952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.787 [2024-07-15 22:51:39.100011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.787 [2024-07-15 22:51:39.100015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.787 [2024-07-15 22:51:39.159850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- 
scripts/common.sh@233 -- # printf %02x 1 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:21:24.355 22:51:39 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 ************************************ 00:21:24.355 START TEST spdk_target_abort 00:21:24.355 ************************************ 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 spdk_targetn1 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 [2024-07-15 22:51:39.884049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:24.355 [2024-07-15 22:51:39.912217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:24.355 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:24.356 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:24.356 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:24.356 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:24.356 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:24.356 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:24.356 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:24.614 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:24.614 22:51:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:27.928 Initializing NVMe Controllers 00:21:27.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:27.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:27.928 Initialization complete. Launching workers. 
00:21:27.928 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9948, failed: 0 00:21:27.928 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1036, failed to submit 8912 00:21:27.928 success 775, unsuccess 261, failed 0 00:21:27.929 22:51:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:27.929 22:51:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:31.212 Initializing NVMe Controllers 00:21:31.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:31.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:31.212 Initialization complete. Launching workers. 00:21:31.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8972, failed: 0 00:21:31.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7801 00:21:31.212 success 372, unsuccess 799, failed 0 00:21:31.212 22:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:31.212 22:51:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.498 Initializing NVMe Controllers 00:21:34.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:34.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:34.498 Initialization complete. Launching workers. 
00:21:34.498 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31436, failed: 0 00:21:34.498 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2290, failed to submit 29146 00:21:34.498 success 446, unsuccess 1844, failed 0 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.498 22:51:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.757 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.757 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84438 00:21:34.757 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84438 ']' 00:21:34.757 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84438 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84438 00:21:35.017 killing process with pid 84438 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84438' 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84438 00:21:35.017 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84438 00:21:35.277 ************************************ 00:21:35.277 END TEST spdk_target_abort 00:21:35.277 ************************************ 00:21:35.277 00:21:35.277 real 0m10.789s 00:21:35.277 user 0m43.391s 00:21:35.277 sys 0m2.050s 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:35.277 22:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:35.277 22:51:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:35.277 22:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:35.277 22:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.277 22:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:35.277 
************************************ 00:21:35.277 START TEST kernel_target_abort 00:21:35.277 ************************************ 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:35.277 22:51:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:35.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.535 Waiting for block devices as requested 00:21:35.535 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:35.795 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:35.795 No valid GPT data, bailing 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:35.795 No valid GPT data, bailing 00:21:35.795 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:36.054 No valid GPT data, bailing 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:36.054 No valid GPT data, bailing 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:36.054 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 --hostid=e2358641-73b4-4563-bfad-61d761fbd8b0 -a 10.0.0.1 -t tcp -s 4420 00:21:36.054 00:21:36.054 Discovery Log Number of Records 2, Generation counter 2 00:21:36.054 =====Discovery Log Entry 0====== 00:21:36.054 trtype: tcp 00:21:36.054 adrfam: ipv4 00:21:36.054 subtype: current discovery subsystem 00:21:36.054 treq: not specified, sq flow control disable supported 00:21:36.054 portid: 1 00:21:36.054 trsvcid: 4420 00:21:36.054 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:36.054 traddr: 10.0.0.1 00:21:36.054 eflags: none 00:21:36.054 sectype: none 00:21:36.054 =====Discovery Log Entry 1====== 00:21:36.054 trtype: tcp 00:21:36.055 adrfam: ipv4 00:21:36.055 subtype: nvme subsystem 00:21:36.055 treq: not specified, sq flow control disable supported 00:21:36.055 portid: 1 00:21:36.055 trsvcid: 4420 00:21:36.055 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:36.055 traddr: 10.0.0.1 00:21:36.055 eflags: none 00:21:36.055 sectype: none 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:36.055 22:51:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:36.055 22:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:39.396 Initializing NVMe Controllers 00:21:39.396 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:39.396 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:39.396 Initialization complete. Launching workers. 00:21:39.396 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34602, failed: 0 00:21:39.396 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34602, failed to submit 0 00:21:39.396 success 0, unsuccess 34602, failed 0 00:21:39.396 22:51:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:39.396 22:51:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:42.684 Initializing NVMe Controllers 00:21:42.684 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:42.684 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:42.684 Initialization complete. Launching workers. 
00:21:42.684 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69079, failed: 0 00:21:42.684 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29978, failed to submit 39101 00:21:42.685 success 0, unsuccess 29978, failed 0 00:21:42.685 22:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:42.685 22:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:45.967 Initializing NVMe Controllers 00:21:45.967 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:45.967 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:45.967 Initialization complete. Launching workers. 00:21:45.967 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83132, failed: 0 00:21:45.967 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20780, failed to submit 62352 00:21:45.967 success 0, unsuccess 20780, failed 0 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:45.967 22:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:46.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:48.445 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:48.445 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:48.445 00:21:48.445 real 0m13.212s 00:21:48.445 user 0m6.311s 00:21:48.445 sys 0m4.290s 00:21:48.445 22:52:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.445 ************************************ 00:21:48.445 END TEST kernel_target_abort 00:21:48.445 22:52:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:48.445 ************************************ 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:48.445 
22:52:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.445 rmmod nvme_tcp 00:21:48.445 rmmod nvme_fabrics 00:21:48.445 rmmod nvme_keyring 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84438 ']' 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84438 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84438 ']' 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84438 00:21:48.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84438) - No such process 00:21:48.445 Process with pid 84438 is not found 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84438 is not found' 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:48.445 22:52:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:48.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:48.961 Waiting for block devices as requested 00:21:48.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:48.961 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:48.961 22:52:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.224 22:52:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:49.224 00:21:49.224 real 0m27.179s 00:21:49.224 user 0m50.840s 00:21:49.224 sys 0m7.652s 00:21:49.224 22:52:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.224 22:52:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:49.224 ************************************ 00:21:49.224 END TEST nvmf_abort_qd_sizes 00:21:49.224 ************************************ 00:21:49.224 22:52:04 -- common/autotest_common.sh@1142 -- # return 0 00:21:49.224 22:52:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:49.224 22:52:04 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:49.224 22:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.224 22:52:04 -- common/autotest_common.sh@10 -- # set +x 00:21:49.224 ************************************ 00:21:49.224 START TEST keyring_file 00:21:49.224 ************************************ 00:21:49.224 22:52:04 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:49.224 * Looking for test storage... 00:21:49.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:49.224 22:52:04 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.225 22:52:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.225 22:52:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.225 22:52:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.225 22:52:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.225 22:52:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.225 22:52:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.225 22:52:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:49.225 22:52:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ckLQ0Nfalc 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ckLQ0Nfalc 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ckLQ0Nfalc 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ckLQ0Nfalc 00:21:49.225 22:52:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cE32rJLeXK 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:49.225 22:52:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cE32rJLeXK 00:21:49.225 22:52:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cE32rJLeXK 00:21:49.226 22:52:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cE32rJLeXK 00:21:49.226 22:52:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=85308 00:21:49.226 22:52:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85308 00:21:49.226 22:52:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85308 ']' 00:21:49.226 22:52:04 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:49.226 22:52:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.226 22:52:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.226 22:52:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.226 22:52:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.226 22:52:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:49.484 [2024-07-15 22:52:04.850922] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
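The prep_key calls above boil down to a small amount of shell: create a temp file, write an NVMe TLS PSK interchange string into it, and lock the permissions down before handing the path to the keyring RPCs against the bperf.sock socket used later in this log. A minimal, hedged sketch of that flow (run from the SPDK repo root; the file name and base64 payload are placeholders, not values from this run):

  # Hedged sketch of prep_key + keyring_file_add_key; payload and file name are illustrative only.
  KEYFILE=$(mktemp)                                    # e.g. /tmp/tmp.ckLQ0Nfalc in the output above
  printf 'NVMeTLSkey-1:00:<base64-psk>:' > "$KEYFILE"  # interchange-format PSK; digest field 0 as in this test
  chmod 0600 "$KEYFILE"                                # wider modes are rejected (see the 0660 failure later in this log)
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEYFILE"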
00:21:49.484 [2024-07-15 22:52:04.851013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85308 ] 00:21:49.484 [2024-07-15 22:52:04.988248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.742 [2024-07-15 22:52:05.110455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.742 [2024-07-15 22:52:05.166342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:50.306 22:52:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:50.306 [2024-07-15 22:52:05.799594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.306 null0 00:21:50.306 [2024-07-15 22:52:05.831536] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.306 [2024-07-15 22:52:05.831752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:50.306 [2024-07-15 22:52:05.839531] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.306 22:52:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:50.306 [2024-07-15 22:52:05.851534] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:50.306 request: 00:21:50.306 { 00:21:50.306 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.306 "secure_channel": false, 00:21:50.306 "listen_address": { 00:21:50.306 "trtype": "tcp", 00:21:50.306 "traddr": "127.0.0.1", 00:21:50.306 "trsvcid": "4420" 00:21:50.306 }, 00:21:50.306 "method": "nvmf_subsystem_add_listener", 00:21:50.306 "req_id": 1 00:21:50.306 } 00:21:50.306 Got JSON-RPC error response 00:21:50.306 response: 00:21:50.306 { 00:21:50.306 "code": -32602, 00:21:50.306 "message": "Invalid parameters" 00:21:50.306 } 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:50.306 22:52:05 
keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.306 22:52:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=85325 00:21:50.306 22:52:05 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:50.306 22:52:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85325 /var/tmp/bperf.sock 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85325 ']' 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:50.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.306 22:52:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:50.564 [2024-07-15 22:52:05.924167] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 00:21:50.564 [2024-07-15 22:52:05.924304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85325 ] 00:21:50.564 [2024-07-15 22:52:06.067115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.822 [2024-07-15 22:52:06.199476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.822 [2024-07-15 22:52:06.255140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:51.436 22:52:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.436 22:52:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:51.436 22:52:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:51.436 22:52:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:51.694 22:52:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cE32rJLeXK 00:21:51.694 22:52:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cE32rJLeXK 00:21:51.953 22:52:07 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:51.953 22:52:07 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:51.953 22:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.953 22:52:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.953 22:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:52.211 22:52:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ckLQ0Nfalc == \/\t\m\p\/\t\m\p\.\c\k\L\Q\0\N\f\a\l\c ]] 00:21:52.211 22:52:07 
keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:52.211 22:52:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:52.211 22:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.211 22:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:52.211 22:52:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.470 22:52:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cE32rJLeXK == \/\t\m\p\/\t\m\p\.\c\E\3\2\r\J\L\e\X\K ]] 00:21:52.470 22:52:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:52.470 22:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:52.470 22:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:52.470 22:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.470 22:52:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.470 22:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:52.729 22:52:08 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:52.729 22:52:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:52.729 22:52:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:52.729 22:52:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:52.729 22:52:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.729 22:52:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:52.729 22:52:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.987 22:52:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:52.987 22:52:08 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:52.987 22:52:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.246 [2024-07-15 22:52:08.561445] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.246 nvme0n1 00:21:53.246 22:52:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:53.246 22:52:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:53.246 22:52:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.246 22:52:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.246 22:52:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.246 22:52:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.506 22:52:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:53.506 22:52:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:53.506 22:52:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:53.506 22:52:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.506 22:52:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:53.506 22:52:08 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.506 22:52:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.764 22:52:09 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:53.764 22:52:09 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:54.024 Running I/O for 1 seconds... 00:21:54.987 00:21:54.987 Latency(us) 00:21:54.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.987 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:54.987 nvme0n1 : 1.01 11604.76 45.33 0.00 0.00 10998.03 5093.93 19184.17 00:21:54.987 =================================================================================================================== 00:21:54.987 Total : 11604.76 45.33 0.00 0.00 10998.03 5093.93 19184.17 00:21:54.987 0 00:21:54.987 22:52:10 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:54.987 22:52:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:55.248 22:52:10 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:55.248 22:52:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:55.248 22:52:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.248 22:52:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.248 22:52:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.248 22:52:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.507 22:52:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:55.507 22:52:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:55.507 22:52:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:55.507 22:52:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.507 22:52:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.507 22:52:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.507 22:52:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:55.765 22:52:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:55.765 22:52:11 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:55.765 22:52:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:55.765 22:52:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:55.765 22:52:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:55.765 22:52:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.765 22:52:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:55.765 22:52:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.765 22:52:11 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:55.765 22:52:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:56.023 [2024-07-15 22:52:11.446317] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:56.023 [2024-07-15 22:52:11.447272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17372e0 (107): Transport endpoint is not connected 00:21:56.023 [2024-07-15 22:52:11.448262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17372e0 (9): Bad file descriptor 00:21:56.023 [2024-07-15 22:52:11.449259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:56.023 [2024-07-15 22:52:11.449280] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:56.023 [2024-07-15 22:52:11.449291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:56.023 request: 00:21:56.023 { 00:21:56.023 "name": "nvme0", 00:21:56.023 "trtype": "tcp", 00:21:56.023 "traddr": "127.0.0.1", 00:21:56.023 "adrfam": "ipv4", 00:21:56.023 "trsvcid": "4420", 00:21:56.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:56.023 "prchk_reftag": false, 00:21:56.023 "prchk_guard": false, 00:21:56.023 "hdgst": false, 00:21:56.023 "ddgst": false, 00:21:56.023 "psk": "key1", 00:21:56.023 "method": "bdev_nvme_attach_controller", 00:21:56.023 "req_id": 1 00:21:56.023 } 00:21:56.023 Got JSON-RPC error response 00:21:56.023 response: 00:21:56.023 { 00:21:56.023 "code": -5, 00:21:56.023 "message": "Input/output error" 00:21:56.023 } 00:21:56.023 22:52:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:56.023 22:52:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.023 22:52:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.023 22:52:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.023 22:52:11 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:56.023 22:52:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:56.023 22:52:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:56.023 22:52:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:56.023 22:52:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:56.023 22:52:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.281 22:52:11 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:56.281 22:52:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:56.281 22:52:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:56.281 22:52:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:56.281 22:52:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:56.281 22:52:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:56.281 
22:52:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.539 22:52:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:56.539 22:52:12 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:56.539 22:52:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:56.799 22:52:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:56.799 22:52:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:57.059 22:52:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:57.059 22:52:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:57.059 22:52:12 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:57.316 22:52:12 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:57.317 22:52:12 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ckLQ0Nfalc 00:21:57.317 22:52:12 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.317 22:52:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:57.317 22:52:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:57.575 [2024-07-15 22:52:12.908631] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ckLQ0Nfalc': 0100660 00:21:57.575 [2024-07-15 22:52:12.908673] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:57.575 request: 00:21:57.575 { 00:21:57.575 "name": "key0", 00:21:57.575 "path": "/tmp/tmp.ckLQ0Nfalc", 00:21:57.575 "method": "keyring_file_add_key", 00:21:57.575 "req_id": 1 00:21:57.575 } 00:21:57.575 Got JSON-RPC error response 00:21:57.575 response: 00:21:57.575 { 00:21:57.575 "code": -1, 00:21:57.575 "message": "Operation not permitted" 00:21:57.575 } 00:21:57.575 22:52:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:57.575 22:52:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.576 22:52:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.576 22:52:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.576 22:52:12 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ckLQ0Nfalc 00:21:57.576 22:52:12 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:57.576 22:52:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ckLQ0Nfalc 00:21:57.833 22:52:13 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ckLQ0Nfalc 00:21:57.833 22:52:13 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:57.833 22:52:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:57.833 22:52:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:57.833 22:52:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:57.834 22:52:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:57.834 22:52:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.092 22:52:13 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:58.092 22:52:13 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:58.092 22:52:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.092 22:52:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.092 [2024-07-15 22:52:13.652803] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ckLQ0Nfalc': No such file or directory 00:21:58.092 [2024-07-15 22:52:13.652855] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:58.092 [2024-07-15 22:52:13.652881] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:58.092 [2024-07-15 22:52:13.652890] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:58.092 [2024-07-15 22:52:13.652899] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:58.350 request: 00:21:58.350 { 00:21:58.350 "name": "nvme0", 00:21:58.350 "trtype": "tcp", 00:21:58.350 "traddr": "127.0.0.1", 00:21:58.350 "adrfam": "ipv4", 00:21:58.350 "trsvcid": "4420", 00:21:58.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:58.350 "prchk_reftag": false, 00:21:58.350 "prchk_guard": false, 00:21:58.350 "hdgst": false, 00:21:58.350 "ddgst": false, 00:21:58.350 "psk": "key0", 00:21:58.350 "method": "bdev_nvme_attach_controller", 00:21:58.350 "req_id": 1 00:21:58.350 } 00:21:58.350 Got JSON-RPC error response 00:21:58.350 
response: 00:21:58.350 { 00:21:58.350 "code": -19, 00:21:58.350 "message": "No such device" 00:21:58.350 } 00:21:58.350 22:52:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:58.350 22:52:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:58.350 22:52:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:58.350 22:52:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:58.350 22:52:13 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:58.350 22:52:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:58.609 22:52:13 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GzuffrLl7j 00:21:58.609 22:52:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:58.609 22:52:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:58.609 22:52:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:58.609 22:52:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:58.609 22:52:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:58.609 22:52:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:58.609 22:52:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:58.609 22:52:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GzuffrLl7j 00:21:58.609 22:52:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GzuffrLl7j 00:21:58.609 22:52:14 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.GzuffrLl7j 00:21:58.609 22:52:14 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GzuffrLl7j 00:21:58.609 22:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GzuffrLl7j 00:21:58.871 22:52:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.871 22:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:59.133 nvme0n1 00:21:59.133 22:52:14 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:59.133 22:52:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:59.133 22:52:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.133 22:52:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.133 22:52:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.133 22:52:14 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.392 22:52:14 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:59.392 22:52:14 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:59.392 22:52:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:59.651 22:52:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:59.651 22:52:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:59.651 22:52:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.651 22:52:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.651 22:52:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.908 22:52:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:59.908 22:52:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:59.908 22:52:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:59.908 22:52:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.908 22:52:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.908 22:52:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.908 22:52:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:00.166 22:52:15 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:22:00.166 22:52:15 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:00.166 22:52:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:00.425 22:52:15 keyring_file -- keyring/file.sh@104 -- # jq length 00:22:00.425 22:52:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:22:00.425 22:52:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:00.684 22:52:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:22:00.684 22:52:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GzuffrLl7j 00:22:00.684 22:52:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GzuffrLl7j 00:22:00.942 22:52:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cE32rJLeXK 00:22:00.942 22:52:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cE32rJLeXK 00:22:01.201 22:52:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:01.201 22:52:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:01.459 nvme0n1 00:22:01.459 22:52:16 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:22:01.459 22:52:16 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:01.718 22:52:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:22:01.718 "subsystems": [ 00:22:01.718 { 00:22:01.718 "subsystem": "keyring", 00:22:01.718 "config": [ 00:22:01.718 { 00:22:01.718 "method": "keyring_file_add_key", 00:22:01.718 "params": { 00:22:01.718 "name": "key0", 00:22:01.718 "path": "/tmp/tmp.GzuffrLl7j" 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "keyring_file_add_key", 00:22:01.718 "params": { 00:22:01.718 "name": "key1", 00:22:01.718 "path": "/tmp/tmp.cE32rJLeXK" 00:22:01.718 } 00:22:01.718 } 00:22:01.718 ] 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "subsystem": "iobuf", 00:22:01.718 "config": [ 00:22:01.718 { 00:22:01.718 "method": "iobuf_set_options", 00:22:01.718 "params": { 00:22:01.718 "small_pool_count": 8192, 00:22:01.718 "large_pool_count": 1024, 00:22:01.718 "small_bufsize": 8192, 00:22:01.718 "large_bufsize": 135168 00:22:01.718 } 00:22:01.718 } 00:22:01.718 ] 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "subsystem": "sock", 00:22:01.718 "config": [ 00:22:01.718 { 00:22:01.718 "method": "sock_set_default_impl", 00:22:01.718 "params": { 00:22:01.718 "impl_name": "uring" 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "sock_impl_set_options", 00:22:01.718 "params": { 00:22:01.718 "impl_name": "ssl", 00:22:01.718 "recv_buf_size": 4096, 00:22:01.718 "send_buf_size": 4096, 00:22:01.718 "enable_recv_pipe": true, 00:22:01.718 "enable_quickack": false, 00:22:01.718 "enable_placement_id": 0, 00:22:01.718 "enable_zerocopy_send_server": true, 00:22:01.718 "enable_zerocopy_send_client": false, 00:22:01.718 "zerocopy_threshold": 0, 00:22:01.718 "tls_version": 0, 00:22:01.718 "enable_ktls": false 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "sock_impl_set_options", 00:22:01.718 "params": { 00:22:01.718 "impl_name": "posix", 00:22:01.718 "recv_buf_size": 2097152, 00:22:01.718 "send_buf_size": 2097152, 00:22:01.718 "enable_recv_pipe": true, 00:22:01.718 "enable_quickack": false, 00:22:01.718 "enable_placement_id": 0, 00:22:01.718 "enable_zerocopy_send_server": true, 00:22:01.718 "enable_zerocopy_send_client": false, 00:22:01.718 "zerocopy_threshold": 0, 00:22:01.718 "tls_version": 0, 00:22:01.718 "enable_ktls": false 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "sock_impl_set_options", 00:22:01.718 "params": { 00:22:01.718 "impl_name": "uring", 00:22:01.718 "recv_buf_size": 2097152, 00:22:01.718 "send_buf_size": 2097152, 00:22:01.718 "enable_recv_pipe": true, 00:22:01.718 "enable_quickack": false, 00:22:01.718 "enable_placement_id": 0, 00:22:01.718 "enable_zerocopy_send_server": false, 00:22:01.718 "enable_zerocopy_send_client": false, 00:22:01.718 "zerocopy_threshold": 0, 00:22:01.718 "tls_version": 0, 00:22:01.718 "enable_ktls": false 00:22:01.718 } 00:22:01.718 } 00:22:01.718 ] 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "subsystem": "vmd", 00:22:01.718 "config": [] 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "subsystem": "accel", 00:22:01.718 "config": [ 00:22:01.718 { 00:22:01.718 "method": "accel_set_options", 00:22:01.718 "params": { 00:22:01.718 "small_cache_size": 128, 00:22:01.718 "large_cache_size": 16, 00:22:01.718 "task_count": 2048, 00:22:01.718 "sequence_count": 2048, 00:22:01.718 "buf_count": 2048 00:22:01.718 } 00:22:01.718 } 00:22:01.718 ] 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "subsystem": "bdev", 00:22:01.718 "config": [ 00:22:01.718 { 00:22:01.718 "method": 
"bdev_set_options", 00:22:01.718 "params": { 00:22:01.718 "bdev_io_pool_size": 65535, 00:22:01.718 "bdev_io_cache_size": 256, 00:22:01.718 "bdev_auto_examine": true, 00:22:01.718 "iobuf_small_cache_size": 128, 00:22:01.718 "iobuf_large_cache_size": 16 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "bdev_raid_set_options", 00:22:01.718 "params": { 00:22:01.718 "process_window_size_kb": 1024 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "bdev_iscsi_set_options", 00:22:01.718 "params": { 00:22:01.718 "timeout_sec": 30 00:22:01.718 } 00:22:01.718 }, 00:22:01.718 { 00:22:01.718 "method": "bdev_nvme_set_options", 00:22:01.718 "params": { 00:22:01.718 "action_on_timeout": "none", 00:22:01.718 "timeout_us": 0, 00:22:01.719 "timeout_admin_us": 0, 00:22:01.719 "keep_alive_timeout_ms": 10000, 00:22:01.719 "arbitration_burst": 0, 00:22:01.719 "low_priority_weight": 0, 00:22:01.719 "medium_priority_weight": 0, 00:22:01.719 "high_priority_weight": 0, 00:22:01.719 "nvme_adminq_poll_period_us": 10000, 00:22:01.719 "nvme_ioq_poll_period_us": 0, 00:22:01.719 "io_queue_requests": 512, 00:22:01.719 "delay_cmd_submit": true, 00:22:01.719 "transport_retry_count": 4, 00:22:01.719 "bdev_retry_count": 3, 00:22:01.719 "transport_ack_timeout": 0, 00:22:01.719 "ctrlr_loss_timeout_sec": 0, 00:22:01.719 "reconnect_delay_sec": 0, 00:22:01.719 "fast_io_fail_timeout_sec": 0, 00:22:01.719 "disable_auto_failback": false, 00:22:01.719 "generate_uuids": false, 00:22:01.719 "transport_tos": 0, 00:22:01.719 "nvme_error_stat": false, 00:22:01.719 "rdma_srq_size": 0, 00:22:01.719 "io_path_stat": false, 00:22:01.719 "allow_accel_sequence": false, 00:22:01.719 "rdma_max_cq_size": 0, 00:22:01.719 "rdma_cm_event_timeout_ms": 0, 00:22:01.719 "dhchap_digests": [ 00:22:01.719 "sha256", 00:22:01.719 "sha384", 00:22:01.719 "sha512" 00:22:01.719 ], 00:22:01.719 "dhchap_dhgroups": [ 00:22:01.719 "null", 00:22:01.719 "ffdhe2048", 00:22:01.719 "ffdhe3072", 00:22:01.719 "ffdhe4096", 00:22:01.719 "ffdhe6144", 00:22:01.719 "ffdhe8192" 00:22:01.719 ] 00:22:01.719 } 00:22:01.719 }, 00:22:01.719 { 00:22:01.719 "method": "bdev_nvme_attach_controller", 00:22:01.719 "params": { 00:22:01.719 "name": "nvme0", 00:22:01.719 "trtype": "TCP", 00:22:01.719 "adrfam": "IPv4", 00:22:01.719 "traddr": "127.0.0.1", 00:22:01.719 "trsvcid": "4420", 00:22:01.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.719 "prchk_reftag": false, 00:22:01.719 "prchk_guard": false, 00:22:01.719 "ctrlr_loss_timeout_sec": 0, 00:22:01.719 "reconnect_delay_sec": 0, 00:22:01.719 "fast_io_fail_timeout_sec": 0, 00:22:01.719 "psk": "key0", 00:22:01.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:01.719 "hdgst": false, 00:22:01.719 "ddgst": false 00:22:01.719 } 00:22:01.719 }, 00:22:01.719 { 00:22:01.719 "method": "bdev_nvme_set_hotplug", 00:22:01.719 "params": { 00:22:01.719 "period_us": 100000, 00:22:01.719 "enable": false 00:22:01.719 } 00:22:01.719 }, 00:22:01.719 { 00:22:01.719 "method": "bdev_wait_for_examine" 00:22:01.719 } 00:22:01.719 ] 00:22:01.719 }, 00:22:01.719 { 00:22:01.719 "subsystem": "nbd", 00:22:01.719 "config": [] 00:22:01.719 } 00:22:01.719 ] 00:22:01.719 }' 00:22:01.719 22:52:17 keyring_file -- keyring/file.sh@114 -- # killprocess 85325 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85325 ']' 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85325 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:01.719 22:52:17 
keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85325 00:22:01.719 killing process with pid 85325 00:22:01.719 Received shutdown signal, test time was about 1.000000 seconds 00:22:01.719 00:22:01.719 Latency(us) 00:22:01.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.719 =================================================================================================================== 00:22:01.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85325' 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@967 -- # kill 85325 00:22:01.719 22:52:17 keyring_file -- common/autotest_common.sh@972 -- # wait 85325 00:22:01.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.978 22:52:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=85569 00:22:01.978 22:52:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85569 /var/tmp/bperf.sock 00:22:01.978 22:52:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85569 ']' 00:22:01.978 22:52:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.978 22:52:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.978 22:52:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
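The second bdevperf instance started below is launched with -c /dev/fd/63, i.e. it replays the JSON captured by save_config so both key files are loaded at startup instead of being added over RPC. A compressed sketch of that pattern, assuming the SPDK repo root and the same bdevperf flags and RPC socket that appear in this log:

  # Hedged sketch: dump the live configuration, then feed it to a fresh bdevperf via process substitution.
  CONFIG=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)     # keyring/sock/bdev subsystems as JSON
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$CONFIG")              # <(...) is what shows up as /dev/fd/63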
00:22:01.978 22:52:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.978 22:52:17 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:01.978 22:52:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.978 22:52:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:22:01.978 "subsystems": [ 00:22:01.978 { 00:22:01.978 "subsystem": "keyring", 00:22:01.978 "config": [ 00:22:01.978 { 00:22:01.978 "method": "keyring_file_add_key", 00:22:01.978 "params": { 00:22:01.978 "name": "key0", 00:22:01.978 "path": "/tmp/tmp.GzuffrLl7j" 00:22:01.978 } 00:22:01.978 }, 00:22:01.978 { 00:22:01.978 "method": "keyring_file_add_key", 00:22:01.978 "params": { 00:22:01.978 "name": "key1", 00:22:01.978 "path": "/tmp/tmp.cE32rJLeXK" 00:22:01.978 } 00:22:01.978 } 00:22:01.978 ] 00:22:01.978 }, 00:22:01.978 { 00:22:01.978 "subsystem": "iobuf", 00:22:01.978 "config": [ 00:22:01.978 { 00:22:01.978 "method": "iobuf_set_options", 00:22:01.978 "params": { 00:22:01.978 "small_pool_count": 8192, 00:22:01.978 "large_pool_count": 1024, 00:22:01.978 "small_bufsize": 8192, 00:22:01.978 "large_bufsize": 135168 00:22:01.978 } 00:22:01.978 } 00:22:01.978 ] 00:22:01.978 }, 00:22:01.978 { 00:22:01.978 "subsystem": "sock", 00:22:01.978 "config": [ 00:22:01.978 { 00:22:01.978 "method": "sock_set_default_impl", 00:22:01.978 "params": { 00:22:01.978 "impl_name": "uring" 00:22:01.978 } 00:22:01.978 }, 00:22:01.978 { 00:22:01.978 "method": "sock_impl_set_options", 00:22:01.978 "params": { 00:22:01.978 "impl_name": "ssl", 00:22:01.978 "recv_buf_size": 4096, 00:22:01.978 "send_buf_size": 4096, 00:22:01.978 "enable_recv_pipe": true, 00:22:01.978 "enable_quickack": false, 00:22:01.978 "enable_placement_id": 0, 00:22:01.978 "enable_zerocopy_send_server": true, 00:22:01.978 "enable_zerocopy_send_client": false, 00:22:01.978 "zerocopy_threshold": 0, 00:22:01.978 "tls_version": 0, 00:22:01.978 "enable_ktls": false 00:22:01.978 } 00:22:01.978 }, 00:22:01.978 { 00:22:01.978 "method": "sock_impl_set_options", 00:22:01.978 "params": { 00:22:01.978 "impl_name": "posix", 00:22:01.978 "recv_buf_size": 2097152, 00:22:01.978 "send_buf_size": 2097152, 00:22:01.978 "enable_recv_pipe": true, 00:22:01.978 "enable_quickack": false, 00:22:01.978 "enable_placement_id": 0, 00:22:01.979 "enable_zerocopy_send_server": true, 00:22:01.979 "enable_zerocopy_send_client": false, 00:22:01.979 "zerocopy_threshold": 0, 00:22:01.979 "tls_version": 0, 00:22:01.979 "enable_ktls": false 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "sock_impl_set_options", 00:22:01.979 "params": { 00:22:01.979 "impl_name": "uring", 00:22:01.979 "recv_buf_size": 2097152, 00:22:01.979 "send_buf_size": 2097152, 00:22:01.979 "enable_recv_pipe": true, 00:22:01.979 "enable_quickack": false, 00:22:01.979 "enable_placement_id": 0, 00:22:01.979 "enable_zerocopy_send_server": false, 00:22:01.979 "enable_zerocopy_send_client": false, 00:22:01.979 "zerocopy_threshold": 0, 00:22:01.979 "tls_version": 0, 00:22:01.979 "enable_ktls": false 00:22:01.979 } 00:22:01.979 } 00:22:01.979 ] 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "subsystem": "vmd", 00:22:01.979 "config": [] 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "subsystem": "accel", 00:22:01.979 "config": [ 00:22:01.979 { 00:22:01.979 "method": "accel_set_options", 00:22:01.979 "params": { 00:22:01.979 "small_cache_size": 128, 00:22:01.979 "large_cache_size": 16, 
00:22:01.979 "task_count": 2048, 00:22:01.979 "sequence_count": 2048, 00:22:01.979 "buf_count": 2048 00:22:01.979 } 00:22:01.979 } 00:22:01.979 ] 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "subsystem": "bdev", 00:22:01.979 "config": [ 00:22:01.979 { 00:22:01.979 "method": "bdev_set_options", 00:22:01.979 "params": { 00:22:01.979 "bdev_io_pool_size": 65535, 00:22:01.979 "bdev_io_cache_size": 256, 00:22:01.979 "bdev_auto_examine": true, 00:22:01.979 "iobuf_small_cache_size": 128, 00:22:01.979 "iobuf_large_cache_size": 16 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "bdev_raid_set_options", 00:22:01.979 "params": { 00:22:01.979 "process_window_size_kb": 1024 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "bdev_iscsi_set_options", 00:22:01.979 "params": { 00:22:01.979 "timeout_sec": 30 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "bdev_nvme_set_options", 00:22:01.979 "params": { 00:22:01.979 "action_on_timeout": "none", 00:22:01.979 "timeout_us": 0, 00:22:01.979 "timeout_admin_us": 0, 00:22:01.979 "keep_alive_timeout_ms": 10000, 00:22:01.979 "arbitration_burst": 0, 00:22:01.979 "low_priority_weight": 0, 00:22:01.979 "medium_priority_weight": 0, 00:22:01.979 "high_priority_weight": 0, 00:22:01.979 "nvme_adminq_poll_period_us": 10000, 00:22:01.979 "nvme_ioq_poll_period_us": 0, 00:22:01.979 "io_queue_requests": 512, 00:22:01.979 "delay_cmd_submit": true, 00:22:01.979 "transport_retry_count": 4, 00:22:01.979 "bdev_retry_count": 3, 00:22:01.979 "transport_ack_timeout": 0, 00:22:01.979 "ctrlr_loss_timeout_sec": 0, 00:22:01.979 "reconnect_delay_sec": 0, 00:22:01.979 "fast_io_fail_timeout_sec": 0, 00:22:01.979 "disable_auto_failback": false, 00:22:01.979 "generate_uuids": false, 00:22:01.979 "transport_tos": 0, 00:22:01.979 "nvme_error_stat": false, 00:22:01.979 "rdma_srq_size": 0, 00:22:01.979 "io_path_stat": false, 00:22:01.979 "allow_accel_sequence": false, 00:22:01.979 "rdma_max_cq_size": 0, 00:22:01.979 "rdma_cm_event_timeout_ms": 0, 00:22:01.979 "dhchap_digests": [ 00:22:01.979 "sha256", 00:22:01.979 "sha384", 00:22:01.979 "sha512" 00:22:01.979 ], 00:22:01.979 "dhchap_dhgroups": [ 00:22:01.979 "null", 00:22:01.979 "ffdhe2048", 00:22:01.979 "ffdhe3072", 00:22:01.979 "ffdhe4096", 00:22:01.979 "ffdhe6144", 00:22:01.979 "ffdhe8192" 00:22:01.979 ] 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "bdev_nvme_attach_controller", 00:22:01.979 "params": { 00:22:01.979 "name": "nvme0", 00:22:01.979 "trtype": "TCP", 00:22:01.979 "adrfam": "IPv4", 00:22:01.979 "traddr": "127.0.0.1", 00:22:01.979 "trsvcid": "4420", 00:22:01.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.979 "prchk_reftag": false, 00:22:01.979 "prchk_guard": false, 00:22:01.979 "ctrlr_loss_timeout_sec": 0, 00:22:01.979 "reconnect_delay_sec": 0, 00:22:01.979 "fast_io_fail_timeout_sec": 0, 00:22:01.979 "psk": "key0", 00:22:01.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:01.979 "hdgst": false, 00:22:01.979 "ddgst": false 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "bdev_nvme_set_hotplug", 00:22:01.979 "params": { 00:22:01.979 "period_us": 100000, 00:22:01.979 "enable": false 00:22:01.979 } 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "method": "bdev_wait_for_examine" 00:22:01.979 } 00:22:01.979 ] 00:22:01.979 }, 00:22:01.979 { 00:22:01.979 "subsystem": "nbd", 00:22:01.979 "config": [] 00:22:01.979 } 00:22:01.979 ] 00:22:01.979 }' 00:22:01.979 [2024-07-15 22:52:17.415633] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 
24.03.0 initialization... 00:22:01.979 [2024-07-15 22:52:17.416016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85569 ] 00:22:02.238 [2024-07-15 22:52:17.552943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.238 [2024-07-15 22:52:17.664275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.238 [2024-07-15 22:52:17.797980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:02.496 [2024-07-15 22:52:17.851359] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.069 22:52:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.069 22:52:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:22:03.069 22:52:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:22:03.069 22:52:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:22:03.069 22:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.069 22:52:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:22:03.069 22:52:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:22:03.069 22:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.069 22:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:03.069 22:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.069 22:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.069 22:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:03.346 22:52:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:03.346 22:52:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:22:03.346 22:52:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:03.346 22:52:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.346 22:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.346 22:52:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.346 22:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:03.604 22:52:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:22:03.604 22:52:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:22:03.604 22:52:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:22:03.604 22:52:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:03.863 22:52:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:22:03.863 22:52:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:03.863 22:52:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.GzuffrLl7j /tmp/tmp.cE32rJLeXK 00:22:03.863 22:52:19 keyring_file -- keyring/file.sh@20 -- # killprocess 85569 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85569 ']' 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85569 00:22:03.863 22:52:19 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85569 00:22:03.863 killing process with pid 85569 00:22:03.863 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.863 00:22:03.863 Latency(us) 00:22:03.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.863 =================================================================================================================== 00:22:03.863 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85569' 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@967 -- # kill 85569 00:22:03.863 22:52:19 keyring_file -- common/autotest_common.sh@972 -- # wait 85569 00:22:04.122 22:52:19 keyring_file -- keyring/file.sh@21 -- # killprocess 85308 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85308 ']' 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85308 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85308 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.122 killing process with pid 85308 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85308' 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@967 -- # kill 85308 00:22:04.122 [2024-07-15 22:52:19.611260] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:04.122 22:52:19 keyring_file -- common/autotest_common.sh@972 -- # wait 85308 00:22:04.753 ************************************ 00:22:04.753 END TEST keyring_file 00:22:04.753 ************************************ 00:22:04.753 00:22:04.753 real 0m15.425s 00:22:04.753 user 0m38.412s 00:22:04.753 sys 0m3.008s 00:22:04.753 22:52:20 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.753 22:52:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:04.753 22:52:20 -- common/autotest_common.sh@1142 -- # return 0 00:22:04.753 22:52:20 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:22:04.754 22:52:20 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:04.754 22:52:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:04.754 22:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.754 22:52:20 -- common/autotest_common.sh@10 -- # set +x 00:22:04.754 ************************************ 00:22:04.754 START TEST keyring_linux 00:22:04.754 ************************************ 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:04.754 * Looking for test 
storage... 00:22:04.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2358641-73b4-4563-bfad-61d761fbd8b0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e2358641-73b4-4563-bfad-61d761fbd8b0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.754 22:52:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.754 22:52:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.754 22:52:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.754 22:52:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.754 22:52:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.754 22:52:20 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.754 22:52:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:04.754 22:52:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:04.754 22:52:20 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:04.754 /tmp/:spdk-test:key0 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:04.754 22:52:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:04.754 /tmp/:spdk-test:key1 00:22:04.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.754 22:52:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85687 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:04.754 22:52:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85687 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85687 ']' 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.754 22:52:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:04.754 [2024-07-15 22:52:20.307770] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
00:22:04.754 [2024-07-15 22:52:20.308771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85687 ] 00:22:05.060 [2024-07-15 22:52:20.447310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.060 [2024-07-15 22:52:20.553377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.060 [2024-07-15 22:52:20.608285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:05.996 22:52:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:05.996 [2024-07-15 22:52:21.279553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.996 null0 00:22:05.996 [2024-07-15 22:52:21.311511] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.996 [2024-07-15 22:52:21.311878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.996 22:52:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:05.996 706711378 00:22:05.996 22:52:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:05.996 601129758 00:22:05.996 22:52:21 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:05.996 22:52:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85704 00:22:05.996 22:52:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85704 /var/tmp/bperf.sock 00:22:05.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85704 ']' 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.996 22:52:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:05.996 [2024-07-15 22:52:21.390656] Starting SPDK v24.09-pre git sha1 d608564df / DPDK 24.03.0 initialization... 
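Note: the keyring_linux flow being exercised at this point condenses to the sequence sketched below. Every command is taken from the trace around it (the PSK file written by prep_key, the bperf RPC socket, the attach arguments); treat it as an illustrative sketch of the test, not the test script itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Load the NVMeTLSkey-1 interchange string (written to /tmp/:spdk-test:key0 by prep_key above)
#    into the kernel session keyring; keyctl prints the key's serial number.
keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s

# 2. Have bdevperf resolve the PSK by keyring name when attaching the NVMe/TCP controller.
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# 3. Verify the key by serial and drop it from the session keyring when the run is done.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"
keyctl unlink "$sn"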
00:22:05.996 [2024-07-15 22:52:21.390937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85704 ] 00:22:05.996 [2024-07-15 22:52:21.525151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.255 [2024-07-15 22:52:21.641398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.822 22:52:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.822 22:52:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:22:06.822 22:52:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:06.822 22:52:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:07.081 22:52:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:07.081 22:52:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:07.340 [2024-07-15 22:52:22.847945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:07.340 22:52:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:07.340 22:52:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:07.599 [2024-07-15 22:52:23.100139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.859 nvme0n1 00:22:07.859 22:52:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:07.859 22:52:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:07.859 22:52:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:07.859 22:52:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:07.859 22:52:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.859 22:52:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:08.120 22:52:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:08.120 22:52:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:08.120 22:52:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:08.120 22:52:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:08.120 22:52:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:08.120 22:52:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.120 22:52:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:08.378 22:52:23 keyring_linux -- keyring/linux.sh@25 -- # sn=706711378 00:22:08.378 22:52:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:08.378 22:52:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:08.378 22:52:23 keyring_linux 
-- keyring/linux.sh@26 -- # [[ 706711378 == \7\0\6\7\1\1\3\7\8 ]] 00:22:08.378 22:52:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 706711378 00:22:08.378 22:52:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:08.378 22:52:23 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.378 Running I/O for 1 seconds... 00:22:09.314 00:22:09.314 Latency(us) 00:22:09.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.314 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:09.314 nvme0n1 : 1.01 12961.92 50.63 0.00 0.00 9815.57 7864.32 17873.45 00:22:09.314 =================================================================================================================== 00:22:09.314 Total : 12961.92 50.63 0.00 0.00 9815.57 7864.32 17873.45 00:22:09.314 0 00:22:09.314 22:52:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:09.314 22:52:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:09.572 22:52:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:09.572 22:52:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:09.572 22:52:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:09.572 22:52:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:09.572 22:52:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.572 22:52:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:09.830 22:52:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:09.830 22:52:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:09.830 22:52:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:09.830 22:52:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.830 22:52:25 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:09.830 22:52:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:10.088 [2024-07-15 22:52:25.635936] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:10.088 [2024-07-15 22:52:25.636102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae1230 (107): Transport endpoint is not connected 00:22:10.088 [2024-07-15 22:52:25.637093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae1230 (9): Bad file descriptor 00:22:10.088 [2024-07-15 22:52:25.638089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:10.088 [2024-07-15 22:52:25.638111] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:10.088 [2024-07-15 22:52:25.638122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:10.088 request: 00:22:10.088 { 00:22:10.088 "name": "nvme0", 00:22:10.088 "trtype": "tcp", 00:22:10.088 "traddr": "127.0.0.1", 00:22:10.088 "adrfam": "ipv4", 00:22:10.088 "trsvcid": "4420", 00:22:10.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.088 "prchk_reftag": false, 00:22:10.088 "prchk_guard": false, 00:22:10.088 "hdgst": false, 00:22:10.088 "ddgst": false, 00:22:10.088 "psk": ":spdk-test:key1", 00:22:10.088 "method": "bdev_nvme_attach_controller", 00:22:10.088 "req_id": 1 00:22:10.088 } 00:22:10.088 Got JSON-RPC error response 00:22:10.088 response: 00:22:10.088 { 00:22:10.088 "code": -5, 00:22:10.088 "message": "Input/output error" 00:22:10.088 } 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@33 -- # sn=706711378 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 706711378 00:22:10.347 1 links removed 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@33 -- # sn=601129758 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 601129758 00:22:10.347 1 links removed 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@41 -- # 
killprocess 85704 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85704 ']' 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85704 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85704 00:22:10.347 killing process with pid 85704 00:22:10.347 Received shutdown signal, test time was about 1.000000 seconds 00:22:10.347 00:22:10.347 Latency(us) 00:22:10.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.347 =================================================================================================================== 00:22:10.347 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85704' 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@967 -- # kill 85704 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@972 -- # wait 85704 00:22:10.347 22:52:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85687 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85687 ']' 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85687 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:10.347 22:52:25 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.605 22:52:25 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85687 00:22:10.605 killing process with pid 85687 00:22:10.605 22:52:25 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.605 22:52:25 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.605 22:52:25 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85687' 00:22:10.605 22:52:25 keyring_linux -- common/autotest_common.sh@967 -- # kill 85687 00:22:10.605 22:52:25 keyring_linux -- common/autotest_common.sh@972 -- # wait 85687 00:22:10.864 ************************************ 00:22:10.865 END TEST keyring_linux 00:22:10.865 ************************************ 00:22:10.865 00:22:10.865 real 0m6.287s 00:22:10.865 user 0m12.226s 00:22:10.865 sys 0m1.512s 00:22:10.865 22:52:26 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:10.865 22:52:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:10.865 22:52:26 -- common/autotest_common.sh@1142 -- # return 0 00:22:10.865 22:52:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- 
spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:10.865 22:52:26 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:10.865 22:52:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:10.865 22:52:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:10.865 22:52:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:10.865 22:52:26 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:10.865 22:52:26 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:10.865 22:52:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:10.865 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:22:10.865 22:52:26 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:10.865 22:52:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:10.865 22:52:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:10.865 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.766 INFO: APP EXITING 00:22:12.766 INFO: killing all VMs 00:22:12.766 INFO: killing vhost app 00:22:12.766 INFO: EXIT DONE 00:22:13.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:13.024 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:13.024 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:13.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:13.861 Cleaning 00:22:13.861 Removing: /var/run/dpdk/spdk0/config 00:22:13.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:13.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:13.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:13.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:13.861 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:13.861 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:13.861 Removing: /var/run/dpdk/spdk1/config 00:22:13.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:13.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:13.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:13.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:13.861 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:13.861 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:13.861 Removing: /var/run/dpdk/spdk2/config 00:22:13.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:13.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:13.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:13.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:13.861 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:13.861 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:13.861 Removing: /var/run/dpdk/spdk3/config 00:22:13.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:13.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:13.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:13.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:13.861 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:13.861 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:13.861 Removing: /var/run/dpdk/spdk4/config 00:22:13.861 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:13.861 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:13.861 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:13.861 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:13.861 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:13.861 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:13.861 Removing: /dev/shm/nvmf_trace.0 00:22:13.861 Removing: /dev/shm/spdk_tgt_trace.pid58689 00:22:13.861 Removing: /var/run/dpdk/spdk0 00:22:13.861 Removing: /var/run/dpdk/spdk1 00:22:13.861 Removing: /var/run/dpdk/spdk2 00:22:13.861 Removing: /var/run/dpdk/spdk3 00:22:13.861 Removing: /var/run/dpdk/spdk4 00:22:13.861 Removing: /var/run/dpdk/spdk_pid58538 00:22:13.861 Removing: /var/run/dpdk/spdk_pid58689 00:22:13.861 Removing: /var/run/dpdk/spdk_pid58881 00:22:13.861 Removing: /var/run/dpdk/spdk_pid58968 00:22:13.861 Removing: /var/run/dpdk/spdk_pid58997 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59105 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59123 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59241 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59432 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59572 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59643 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59719 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59810 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59887 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59920 00:22:13.861 Removing: /var/run/dpdk/spdk_pid59956 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60017 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60117 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60544 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60596 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60647 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60663 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60730 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60746 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60813 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60829 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60880 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60898 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60938 00:22:13.861 Removing: /var/run/dpdk/spdk_pid60956 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61083 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61114 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61189 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61240 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61265 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61329 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61362 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61398 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61427 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61467 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61496 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61536 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61565 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61605 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61634 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61674 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61703 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61743 00:22:13.861 Removing: /var/run/dpdk/spdk_pid61772 00:22:14.119 Removing: /var/run/dpdk/spdk_pid61814 00:22:14.119 Removing: /var/run/dpdk/spdk_pid61843 00:22:14.119 Removing: /var/run/dpdk/spdk_pid61884 00:22:14.119 Removing: /var/run/dpdk/spdk_pid61916 00:22:14.119 Removing: /var/run/dpdk/spdk_pid61959 00:22:14.119 Removing: /var/run/dpdk/spdk_pid61988 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62029 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62096 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62189 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62497 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62509 00:22:14.119 
Removing: /var/run/dpdk/spdk_pid62545 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62559 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62580 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62599 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62618 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62639 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62658 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62677 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62687 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62712 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62725 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62746 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62765 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62784 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62794 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62819 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62832 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62853 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62889 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62897 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62932 00:22:14.119 Removing: /var/run/dpdk/spdk_pid62996 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63020 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63034 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63063 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63072 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63085 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63130 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63143 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63172 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63181 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63196 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63206 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63217 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63227 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63236 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63250 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63280 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63306 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63316 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63350 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63359 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63367 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63413 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63419 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63451 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63458 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63466 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63479 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63481 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63494 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63507 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63509 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63583 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63636 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63741 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63774 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63819 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63839 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63856 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63876 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63907 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63928 00:22:14.119 Removing: /var/run/dpdk/spdk_pid63997 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64020 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64064 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64145 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64201 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64230 00:22:14.119 Removing: 
/var/run/dpdk/spdk_pid64327 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64370 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64408 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64626 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64724 00:22:14.119 Removing: /var/run/dpdk/spdk_pid64752 00:22:14.119 Removing: /var/run/dpdk/spdk_pid65081 00:22:14.119 Removing: /var/run/dpdk/spdk_pid65119 00:22:14.119 Removing: /var/run/dpdk/spdk_pid65408 00:22:14.119 Removing: /var/run/dpdk/spdk_pid65818 00:22:14.377 Removing: /var/run/dpdk/spdk_pid66104 00:22:14.377 Removing: /var/run/dpdk/spdk_pid66885 00:22:14.377 Removing: /var/run/dpdk/spdk_pid67705 00:22:14.377 Removing: /var/run/dpdk/spdk_pid67821 00:22:14.377 Removing: /var/run/dpdk/spdk_pid67889 00:22:14.377 Removing: /var/run/dpdk/spdk_pid69142 00:22:14.377 Removing: /var/run/dpdk/spdk_pid69356 00:22:14.377 Removing: /var/run/dpdk/spdk_pid72744 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73055 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73165 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73299 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73326 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73354 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73376 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73475 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73608 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73765 00:22:14.377 Removing: /var/run/dpdk/spdk_pid73840 00:22:14.377 Removing: /var/run/dpdk/spdk_pid74029 00:22:14.377 Removing: /var/run/dpdk/spdk_pid74118 00:22:14.377 Removing: /var/run/dpdk/spdk_pid74205 00:22:14.377 Removing: /var/run/dpdk/spdk_pid74522 00:22:14.377 Removing: /var/run/dpdk/spdk_pid74897 00:22:14.377 Removing: /var/run/dpdk/spdk_pid74905 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75175 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75195 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75213 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75245 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75250 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75552 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75599 00:22:14.377 Removing: /var/run/dpdk/spdk_pid75872 00:22:14.377 Removing: /var/run/dpdk/spdk_pid76074 00:22:14.377 Removing: /var/run/dpdk/spdk_pid76459 00:22:14.377 Removing: /var/run/dpdk/spdk_pid76967 00:22:14.377 Removing: /var/run/dpdk/spdk_pid77787 00:22:14.377 Removing: /var/run/dpdk/spdk_pid78367 00:22:14.377 Removing: /var/run/dpdk/spdk_pid78375 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80277 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80343 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80402 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80458 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80579 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80638 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80694 00:22:14.377 Removing: /var/run/dpdk/spdk_pid80753 00:22:14.377 Removing: /var/run/dpdk/spdk_pid81069 00:22:14.377 Removing: /var/run/dpdk/spdk_pid82227 00:22:14.377 Removing: /var/run/dpdk/spdk_pid82368 00:22:14.377 Removing: /var/run/dpdk/spdk_pid82611 00:22:14.377 Removing: /var/run/dpdk/spdk_pid83150 00:22:14.377 Removing: /var/run/dpdk/spdk_pid83309 00:22:14.377 Removing: /var/run/dpdk/spdk_pid83470 00:22:14.377 Removing: /var/run/dpdk/spdk_pid83563 00:22:14.377 Removing: /var/run/dpdk/spdk_pid83730 00:22:14.377 Removing: /var/run/dpdk/spdk_pid83839 00:22:14.377 Removing: /var/run/dpdk/spdk_pid84489 00:22:14.377 Removing: /var/run/dpdk/spdk_pid84525 00:22:14.377 Removing: /var/run/dpdk/spdk_pid84562 00:22:14.377 Removing: /var/run/dpdk/spdk_pid84813 
00:22:14.377 Removing: /var/run/dpdk/spdk_pid84851 00:22:14.377 Removing: /var/run/dpdk/spdk_pid84881 00:22:14.377 Removing: /var/run/dpdk/spdk_pid85308 00:22:14.377 Removing: /var/run/dpdk/spdk_pid85325 00:22:14.377 Removing: /var/run/dpdk/spdk_pid85569 00:22:14.377 Removing: /var/run/dpdk/spdk_pid85687 00:22:14.377 Removing: /var/run/dpdk/spdk_pid85704 00:22:14.377 Clean 00:22:14.378 22:52:29 -- common/autotest_common.sh@1451 -- # return 0 00:22:14.378 22:52:29 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:14.378 22:52:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:14.378 22:52:29 -- common/autotest_common.sh@10 -- # set +x 00:22:14.636 22:52:29 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:14.636 22:52:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:14.636 22:52:29 -- common/autotest_common.sh@10 -- # set +x 00:22:14.636 22:52:30 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:14.636 22:52:30 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:14.636 22:52:30 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:14.636 22:52:30 -- spdk/autotest.sh@391 -- # hash lcov 00:22:14.636 22:52:30 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:14.636 22:52:30 -- spdk/autotest.sh@393 -- # hostname 00:22:14.636 22:52:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:14.893 geninfo: WARNING: invalid characters removed from testname! 
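The coverage post-processing that follows boils down to merging the pre-test and post-test captures and pruning code that is not SPDK's own. Condensed below, with the long --rc flag list and output directory abbreviated, so this is a sketch rather than the exact autotest invocation:

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
out=/home/vagrant/spdk_repo/spdk/../output

# Merge the baseline capture with the capture taken after the tests ran.
lcov $LCOV_OPTS -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info

# Remove DPDK, system headers, and example/app code before any report is generated.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r $out/cov_total.info "$pat" -o $out/cov_total.info
done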
00:22:41.428 22:52:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.953 22:52:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:46.479 22:53:01 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:49.072 22:53:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:51.598 22:53:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:54.124 22:53:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:56.659 22:53:12 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:56.659 22:53:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.659 22:53:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:56.659 22:53:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.659 22:53:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.659 22:53:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.659 22:53:12 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.659 22:53:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.659 22:53:12 -- paths/export.sh@5 -- $ export PATH 00:22:56.659 22:53:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.659 22:53:12 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:56.659 22:53:12 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:56.659 22:53:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721083992.XXXXXX 00:22:56.659 22:53:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721083992.Y5kg2d 00:22:56.659 22:53:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:56.659 22:53:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:56.659 22:53:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:56.659 22:53:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:56.659 22:53:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:56.659 22:53:12 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:56.659 22:53:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:56.659 22:53:12 -- common/autotest_common.sh@10 -- $ set +x 00:22:56.659 22:53:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:56.659 22:53:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:56.659 22:53:12 -- pm/common@17 -- $ local monitor 00:22:56.659 22:53:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:56.659 22:53:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:56.659 22:53:12 -- pm/common@25 -- $ sleep 1 00:22:56.659 22:53:12 -- pm/common@21 -- $ date +%s 00:22:56.659 22:53:12 -- pm/common@21 -- $ date +%s 00:22:56.659 22:53:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721083992 00:22:56.659 22:53:12 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721083992 00:22:56.659 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721083992_collect-vmstat.pm.log 00:22:56.659 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721083992_collect-cpu-load.pm.log 00:22:57.591 22:53:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:57.591 22:53:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:57.591 22:53:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:57.591 22:53:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:57.591 22:53:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:57.591 22:53:13 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:57.591 22:53:13 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:57.591 22:53:13 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:57.591 22:53:13 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:57.871 22:53:13 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:57.871 22:53:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:57.871 22:53:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:57.871 22:53:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:57.871 22:53:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:57.871 22:53:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:57.871 22:53:13 -- pm/common@44 -- $ pid=87403 00:22:57.871 22:53:13 -- pm/common@50 -- $ kill -TERM 87403 00:22:57.871 22:53:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:57.871 22:53:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:57.871 22:53:13 -- pm/common@44 -- $ pid=87404 00:22:57.871 22:53:13 -- pm/common@50 -- $ kill -TERM 87404 00:22:57.871 + [[ -n 5099 ]] 00:22:57.871 + sudo kill 5099 00:22:57.877 [Pipeline] } 00:22:57.892 [Pipeline] // timeout 00:22:57.896 [Pipeline] } 00:22:57.906 [Pipeline] // stage 00:22:57.909 [Pipeline] } 00:22:57.919 [Pipeline] // catchError 00:22:57.924 [Pipeline] stage 00:22:57.925 [Pipeline] { (Stop VM) 00:22:57.933 [Pipeline] sh 00:22:58.204 + vagrant halt 00:23:01.487 ==> default: Halting domain... 00:23:08.101 [Pipeline] sh 00:23:08.385 + vagrant destroy -f 00:23:12.568 ==> default: Removing domain... 
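For reference, the 'Build Timing' flamegraph step seen above can be reproduced offline from the timing.txt this run archived. flamegraph.pl writes SVG to stdout, so only the output redirect below is added here; the .svg filename is illustrative:

# Regenerate the per-step build-timing flamegraph from the archived timing log.
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds \
    /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg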
00:23:12.584 [Pipeline] sh 00:23:12.866 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:23:12.875 [Pipeline] } 00:23:12.894 [Pipeline] // stage 00:23:12.900 [Pipeline] } 00:23:12.918 [Pipeline] // dir 00:23:12.924 [Pipeline] } 00:23:12.943 [Pipeline] // wrap 00:23:12.949 [Pipeline] } 00:23:12.966 [Pipeline] // catchError 00:23:12.991 [Pipeline] stage 00:23:12.994 [Pipeline] { (Epilogue) 00:23:13.012 [Pipeline] sh 00:23:13.295 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:19.868 [Pipeline] catchError 00:23:19.870 [Pipeline] { 00:23:19.884 [Pipeline] sh 00:23:20.165 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:20.165 Artifacts sizes are good 00:23:20.173 [Pipeline] } 00:23:20.194 [Pipeline] // catchError 00:23:20.206 [Pipeline] archiveArtifacts 00:23:20.213 Archiving artifacts 00:23:20.370 [Pipeline] cleanWs 00:23:20.393 [WS-CLEANUP] Deleting project workspace... 00:23:20.393 [WS-CLEANUP] Deferred wipeout is used... 00:23:20.410 [WS-CLEANUP] done 00:23:20.412 [Pipeline] } 00:23:20.435 [Pipeline] // stage 00:23:20.443 [Pipeline] } 00:23:20.463 [Pipeline] // node 00:23:20.468 [Pipeline] End of Pipeline 00:23:20.503 Finished: SUCCESS